Navigating the Future of AI Under India's Data Protection Law
Artificial Intelligence is transforming how organizations process personal data. From recommendation engines to large language models, AI systems present unique compliance challenges under DPDPA.
This module explores the cutting-edge intersection of AI and data protection, addressing questions that every modern Data Fiduciary must consider:
Note: While DPDPA doesn't explicitly mention "AI" or "machine learning," its principles fully apply to AI systems. This module provides practical guidance for AI compliance under India's data protection regime.
AI systems process personal data across multiple stages, each with unique DPDPA implications:
Use Case: "ShopSmart" builds AI to recommend products to users
| AI Lifecycle Stage | Data Processing Activity | DPDPA Compliance Action |
|---|---|---|
| 1. Data Collection | Collect browsing history, past purchases, demographic data | • Obtain consent for "personalized recommendations" • Privacy notice explaining data use in ML • Clear opt-out mechanism |
| 2. Data Preparation | Clean data, handle missing values, create feature vectors | • Remove excessive attributes (data minimization) • Anonymize test/dev datasets • Document data transformations |
| 3. Model Training | Train collaborative filtering model on historical purchases | • Verify purpose compatibility (recommendations ✓, not credit scoring ✗) • Implement training data retention policy • Secure training environment |
| 4. Model Validation | Test model accuracy, check for bias | • Test accuracy across demographic groups • Identify potential discrimination (e.g., gender-based recommendations) • Document validation results |
| 5. Deployment | Deploy model to production servers | • Encryption for model files • Access controls (who can modify model) • Version control and audit logging |
| 6. Inference | Generate real-time recommendations for users | • Honor user consent choices • Respect withdrawal (stop personalization) • Allow users to see/correct data influencing recommendations |
| 7. Monitoring | Track model performance, detect drift | • Monitor for accuracy degradation • Detect and respond to bias emergence • Log unusual patterns (potential breach indicator) |
| 8. Decommissioning | Retire old model, delete training data | • Delete training data per retention policy • Securely destroy model artifacts • Document decommissioning |
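The inference-stage obligations (stage 6: honor consent choices, respect withdrawal) can be sketched as a consent gate in front of the model. The consent store, purpose label, and model interface below are illustrative assumptions, not a DPDPA-prescribed API:

```python
# Sketch: a consent gate at the inference stage. The consent store,
# purpose label, and model interface are illustrative assumptions.

GENERIC_RESULTS = ["bestsellers", "new arrivals"]  # non-personalized fallback

class RecommendationService:
    def __init__(self, consent_store, model):
        self.consent_store = consent_store  # user_id -> set of consented purposes
        self.model = model                  # callable: user_id -> list of items

    def recommend(self, user_id):
        purposes = self.consent_store.get(user_id, set())
        if "personalized_recommendations" not in purposes:
            # Consent never given or withdrawn: serve non-personalized results
            return GENERIC_RESULTS
        return self.model(user_id)

store = {"u1": {"personalized_recommendations"}, "u2": set()}  # u2 withdrew consent
svc = RecommendationService(store, model=lambda uid: ["running shoes", "socks"])
print(svc.recommend("u1"))  # ['running shoes', 'socks'] -- personalized
print(svc.recommend("u2"))  # ['bestsellers', 'new arrivals'] -- generic fallback
```

The key design point: withdrawal takes effect at the next request, because consent is checked on every inference call rather than baked into the model.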
Right to Grievance Redressal (Section 13):
Right to Erasure/Deletion (Section 12):
Right to Access (Section 11):
When a Data Principal exercises the right to erasure, simply deleting their data from the database doesn't remove its influence from an already-trained model.
Approaches:
DPDPA Implication: Organizations should document their approach to data deletion in AI contexts and ensure it's effective, not just symbolic.
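The most defensible approach, exact unlearning by retraining without the erased records, can be sketched as follows. The in-memory records and toy averaging "model" are stand-ins for a real pipeline; the point is that retraining on the remaining data leaves no residual influence from the erased user:

```python
# Sketch: exact machine unlearning via "delete, then retrain from
# scratch". The toy averaging model is a stand-in for a real pipeline.

def train(records):
    # Toy model: average purchase amount per product category
    by_category = {}
    for r in records:
        by_category.setdefault(r["category"], []).append(r["amount"])
    return {cat: sum(v) / len(v) for cat, v in by_category.items()}

def erase_and_retrain(records, user_id):
    remaining = [r for r in records if r["user_id"] != user_id]
    return remaining, train(remaining)

data = [
    {"user_id": "u1", "category": "books", "amount": 100},
    {"user_id": "u2", "category": "books", "amount": 300},
]
data, model = erase_and_retrain(data, "u1")
print(model)  # {'books': 300.0} -- u1's record no longer shapes the model
```

Full retraining is expensive at scale, which is why approximate unlearning techniques exist; whichever approach is used, the documented deletion procedure should demonstrably remove the Data Principal's influence.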
DPDPA's data minimization principle (Section 4(1)(b)) requires collecting only data "necessary" for a specified purpose. However:
DPDPA Perspective: "Better model performance" alone doesn't justify excessive data collection. Organizations must demonstrate necessity, not just utility.
Approach: Instead of using all available data, identify truly predictive features.
Excessive Features (Non-Compliant):
Necessary Features (DPDPA-Compliant):
Key Insight: The minimized model may have 2-3% lower accuracy, but uses 80% less personal data. This trade-off is required by DPDPA.
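One practical way to operationalize this trade-off is an explicit, documented allow-list of features per purpose, enforced before data can reach the training pipeline. The purpose and field names below are hypothetical:

```python
# Sketch: data minimization via a documented per-purpose feature
# allow-list. Purpose and field names are hypothetical.

PURPOSE_FEATURES = {
    "product_recommendations": {"viewed_products", "purchases", "search_queries"},
}

def minimize(record, purpose):
    """Drop every attribute not documented as necessary for the purpose
    before it can reach the training pipeline."""
    allowed = PURPOSE_FEATURES[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "viewed_products": ["p1", "p2"],
    "purchases": ["p2"],
    "gps_location": (19.07, 72.88),  # excessive for this purpose
    "payment_method": "card",        # irrelevant to recommendations
}
print(minimize(raw, "product_recommendations"))
# {'viewed_products': ['p1', 'p2'], 'purchases': ['p2']}
```

Keeping the allow-list in code (and version control) also gives auditors a record of what was deemed "necessary" and when that judgment changed.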
| Technique | How It Works | DPDPA Benefit | Use Case |
|---|---|---|---|
| Federated Learning | Train model across decentralized devices without collecting raw data centrally | Personal data never leaves user's device | Google Keyboard learns typing patterns without seeing your messages |
| Differential Privacy | Add carefully calibrated noise to data/model to prevent identification of individuals | Can share aggregate insights without exposing individuals | Apple's usage analytics with privacy guarantee |
| Homomorphic Encryption | Perform computations on encrypted data without decrypting | Third party can train model without seeing plaintext data | Healthcare: Train on patient records without exposing medical data |
| Synthetic Data Generation | Create artificial datasets that mimic statistical properties of real data | Use for testing/development without real personal data | Banking: Generate synthetic transactions for fraud detection model testing |
| Secure Multi-Party Computation | Multiple parties jointly compute function without revealing inputs to each other | Collaborative learning without data sharing | Banks collaborate on fraud detection without sharing customer data |
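As a concrete illustration of differential privacy, here is a minimal Laplace mechanism for a counting query. A count has sensitivity 1 (one individual changes it by at most 1), so noise is drawn from Laplace(0, 1/epsilon); the epsilon value and the count itself are illustrative:

```python
import math
import random

# Sketch: the Laplace mechanism for a counting query. Sensitivity of a
# count is 1, so noise scale is 1/epsilon. Values are illustrative.

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale) using only the stdlib
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon):
    """Release a count with noise calibrated to sensitivity / epsilon."""
    return true_count + laplace_noise(scale=1.0 / epsilon)

random.seed(42)
n_diabetic = 1037  # hypothetical true count from patient records
print(round(private_count(n_diabetic, epsilon=0.5), 1))
```

Smaller epsilon means more noise and stronger privacy: the released aggregate stays useful while no individual's presence or absence can be confidently inferred from it.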
Anonymization: Personal data transformed so individuals cannot be re-identified (not covered by DPDPA if done properly)
Pseudonymization: Replace identifiers with pseudonyms (still personal data under DPDPA, but lower risk)
Scenario: Hospital wants to train disease prediction model using patient records.
Naive Approach (Not Anonymous):
Patient_ID | Age | Gender | Zipcode | Disease
P001 | 42 | M | 400001 | Diabetes
P002 | 42 | M | 400001 | Hypertension
❌ Problem: With just age, gender, and zipcode, individuals may be identifiable (quasi-identifiers)
K-Anonymous Approach:
Group | Age_Range | Gender | Zipcode_Prefix | Disease_Distribution
G1 | 40-50 | M | 4000** | 60% Diabetes, 40% Hypertension
G2 | 40-50 | F | 4000** | 55% Diabetes, 45% Hypertension
✅ Each group has k≥5 individuals, making re-identification difficult
Trade-off: Loss of granularity may reduce model accuracy, but gains privacy protection
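The k-anonymity check above can be sketched in code: generalize the quasi-identifiers (age to a decade range, zipcode to a 4-digit prefix), then verify every group has at least k members. The generalization rules and field names are illustrative:

```python
from collections import Counter

# Sketch: generalizing quasi-identifiers and checking k-anonymity.
# Generalization rules and field names are illustrative.

def generalize(record):
    lo = (record["age"] // 10) * 10
    return (f"{lo}-{lo + 10}", record["gender"], record["zipcode"][:4] + "**")

def is_k_anonymous(records, k):
    """True if every quasi-identifier group has at least k records."""
    groups = Counter(generalize(r) for r in records)
    return all(count >= k for count in groups.values())

patients = [
    {"age": 42, "gender": "M", "zipcode": "400001", "disease": "Diabetes"},
    {"age": 47, "gender": "M", "zipcode": "400003", "disease": "Hypertension"},
    {"age": 44, "gender": "M", "zipcode": "400002", "disease": "Diabetes"},
]
print(is_k_anonymous(patients, k=3))  # True: all share group ('40-50', 'M', '4000**')
print(is_k_anonymous(patients, k=5))  # False: group too small, generalize further
```

When the check fails, the remedy is coarser generalization (wider age bands, shorter zipcode prefixes) or suppressing outlier records, accepting the accuracy trade-off noted above.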
Ask these questions for each data element:
Purpose: Recommend movies/shows to users
| Data Element | Necessary? | Reasoning |
|---|---|---|
| Watch history (titles, genres) | ✅ YES | Directly relevant: Past viewing predicts future preferences |
| Ratings provided by user | ✅ YES | Explicit signal of preferences |
| Time of day watched | ⚠️ MAYBE | Could improve recommendations (morning = news, night = movies) but not essential |
| Device type (mobile/TV/laptop) | ⚠️ MAYBE | Minor relevance (content format preference) but weak necessity |
| GPS location during watching | ❌ NO | Not necessary for recommendations; excessive |
| Payment method details | ❌ NO | No relevance to content preferences |
| Social media profiles | ❌ NO | Purpose creep; not needed for streaming recommendations |
Section 5 + Rule 3 require clear notice. For AI systems, this should include:
❌ Vague Notice (Insufficient):
"We use technology and algorithms to improve your experience
and provide personalized content."
Problems:
✅ Better Notice (Specific and Layered):
AUTOMATED DECISION-MAKING NOTICE
We use machine learning algorithms to make the following automated
decisions:
1. CONTENT RECOMMENDATIONS
- What it does: Suggests products based on your browsing and purchase history
- Data used: Products viewed, added to cart, purchased; search queries
- Impact: Personalized homepage and email recommendations
- Human review: You can request manual curation by contacting support
2. FRAUD DETECTION
- What it does: Automatically flags suspicious transactions
- Data used: Transaction amount, location, frequency, device fingerprint
- Impact: May block transaction pending verification
- Human review: Decisions are reviewed by fraud team within 1 hour
3. DYNAMIC PRICING
- What it does: Adjusts prices based on demand and inventory
- Data used: Aggregated browsing data (not individual user data)
- Impact: Prices may vary by +/- 10%
- Human review: Pricing algorithms audited quarterly
You can opt out of personalized recommendations in Settings → Privacy.
You cannot opt out of fraud detection (legal obligation).
For questions: privacy@company.com
Why This Works:
Consent REQUIRED (Section 6):
Consent NOT Required - Legitimate Use (Section 7):
Scenario: Digital bank uses AI for multiple purposes
Purpose 1: Fraud Detection AI (No Consent Needed)
Notice to Customer:
"We use AI to detect fraudulent transactions in real-time to protect
your account. This processing is necessary for security and is a
legal obligation. No consent is required."
Purpose 2: Investment Recommendation AI (Consent Required)
Consent Request:
"We'd like to use AI to analyze your transaction history and provide
personalized investment recommendations. This is optional and not
required for your banking services.
[ ] Yes, analyze my data for investment recommendations
[ ] No, I don't want personalized investment advice
You can change this choice anytime in Settings."
Purpose 3: Credit Scoring AI (Legitimate Interest, but High Risk)
Notice + Opportunity to Object:
"When you apply for a loan, we use an AI model to assess creditworthiness
based on your income, employment, and repayment history. This is necessary
to process your application.
The AI decision will be reviewed by a loan officer before final approval.
You have the right to request human-only review without AI involvement.
[ ] I want human-only credit assessment (may take 2-3 extra days)"
1. Just-in-Time Notice
Show AI notice when user first encounters the AI feature, not buried in privacy policy.
[User searches for "running shoes"]
---
💡 TIP: We use AI to personalize search results based on your
browsing history. [Learn More] [Disable Personalization]
---
[Search Results]
2. Layered Transparency
Provide both short notice and detailed explanation.
Short Version (Always Visible):
"AI-powered recommendations based on your history"
Detailed Version (Click "Learn More"):
- Data used: Last 90 days browsing, purchases
- How it works: Collaborative filtering algorithm
- Accuracy: 78% match rate in testing
- Privacy: Data never shared with third parties
- Control: [Disable] [View my data] [Request manual curation]
3. Explanation on Demand
Allow users to ask "Why did the AI recommend this?"
[Recommended: Wireless Headphones - $99]
[Why this recommendation?]
→ Clicked: Because you viewed wireless headphones 3 times
→ Clicked: People who bought your recent phone also bought this
→ Clicked: High ratings (4.5/5) match your preference for quality products
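Such on-demand explanations can be assembled from the signals the recommender actually used. A minimal sketch, with hypothetical signal names:

```python
# Sketch: building "Why this recommendation?" reasons from the signals
# the recommender actually used. Signal names are hypothetical.

def explain_recommendation(item, signals):
    reasons = []
    views = signals.get("view_count", 0)
    if views >= 2:
        reasons.append(f"Because you viewed {item} {views} times")
    if signals.get("bought_with_recent_purchase"):
        reasons.append("People who bought your recent purchase also bought this")
    rating = signals.get("avg_rating")
    if rating and rating >= 4.0:
        reasons.append(f"High ratings ({rating}/5) match your preference for quality")
    return reasons or ["Popular with shoppers like you"]

for reason in explain_recommendation(
    "Wireless Headphones",
    {"view_count": 3, "bought_with_recent_purchase": True, "avg_rating": 4.5},
):
    print("→", reason)
```

The honesty requirement is the hard part: each template must be wired to a signal the model genuinely used, or the "explanation" itself becomes misleading.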
While DPDPA doesn't explicitly mention "algorithmic bias," several provisions implicitly require fair AI:
| Bias Type | Definition | Example | Mitigation |
|---|---|---|---|
| Historical Bias | Training data reflects past discrimination | Loan approval AI trained on historical data where women were systematically denied loans | Identify protected attributes; adjust training data distribution |
| Representation Bias | Training data doesn't represent all groups equally | Facial recognition AI trained mostly on light-skinned faces performs poorly on dark-skinned faces | Ensure diverse, representative datasets |
| Measurement Bias | Features chosen are proxies for protected attributes | Using zipcode as feature effectively encodes race/income due to residential segregation | Avoid proxy variables; test for disparate impact |
| Aggregation Bias | One model for all groups when different groups have different patterns | Medical AI that works for men but not women because gender-specific patterns ignored | Group-specific models or fairness constraints |
| Evaluation Bias | Testing doesn't cover all demographic groups | AI tested only on young users performs poorly for elderly | Stratified testing across demographics |
Common fairness definitions (note: satisfying all simultaneously is mathematically impossible!):
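Two of the most common definitions, demographic parity (equal selection rates across groups) and the true-positive-rate component of equalized odds, can be measured directly from screening outcomes. The data below is illustrative; a real audit also needs statistical significance testing:

```python
# Sketch: measuring demographic parity (selection rates) and the TPR
# half of equalized odds from screening outcomes. Data is illustrative.

def rate(values):
    return sum(values) / len(values) if values else 0.0

def fairness_report(records):
    """records: list of (group, y_true, y_pred), where 1 = positive."""
    report = {}
    for g in {grp for grp, _, _ in records}:
        preds = [p for grp, _, p in records if grp == g]
        tp_preds = [p for grp, t, p in records if grp == g and t == 1]
        report[g] = {
            "selection_rate": rate(preds),         # demographic parity
            "true_positive_rate": rate(tp_preds),  # equalized odds (TPR)
        }
    return report

data = (
    [("M", 1, 1)] * 6 + [("M", 1, 0)] * 2 + [("M", 0, 0)] * 2 +
    [("F", 1, 1)] * 3 + [("F", 1, 0)] * 5 + [("F", 0, 0)] * 2
)
report = fairness_report(data)
print(report["M"])  # {'selection_rate': 0.6, 'true_positive_rate': 0.75}
print(report["F"])  # {'selection_rate': 0.3, 'true_positive_rate': 0.375}
```

Large gaps in either metric, like the 0.6 vs 0.3 selection rates here, are the kind of red flag the TechRecruit case study below illustrates.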
Company: TechRecruit India (hypothetical)
AI System: Automated resume screening for software engineer positions
The Problem:
Bias Detection:
Performance by Gender:
Male candidates: 75% pass screening; 60% of all male applicants hired → Precision: 80%
Female candidates: 40% pass screening; 35% of all female applicants hired → Precision: 87.5%
❌ Failed demographic parity (75% vs 40% pass rate)
❌ Failed equalized odds (different true positive rates across genders)
⚠️ Paradox: Female candidates who pass are hired at a HIGHER rate (87.5% vs 80%): the algorithm over-compensates and screens women too aggressively
DPDPA Violations:
Remediation:
While DPDPA doesn't mandate "explainability," practical compliance often requires it:
| Technique | What It Does | Use Case | Compliance Benefit |
|---|---|---|---|
| LIME (Local Interpretable Model-agnostic Explanations) | Explains individual predictions by approximating the model locally with an interpretable model | Loan rejection: "Denied due to: debt ratio (60%), recent inquiries (25%), short credit history (15%)" | Enables correction requests; shows logic to Data Principals |
| SHAP (SHapley Additive exPlanations) | Assigns each feature an importance value for a prediction | Credit score: Income (+50), Employment (+30), Debt (-20) → Final Score: 720 | Demonstrates fairness (no protected attribute influence); supports accuracy claims |
| Attention Mechanisms | Shows which input parts model focused on (for text/images) | Resume screening: Model focused on "Python, ML, 5 years" keywords | Proves relevance of data used; helps detect proxy discrimination |
| Counterfactual Explanations | Shows what change would flip the decision | "If income were ₹50K higher, loan would be approved" | Helps Data Principals understand path to favorable outcome |
| Model Cards | Standardized documentation of model's intended use, training data, performance | Document: Fraud detection model, 95% accuracy, tested on diverse demographics | Organizational accountability; audit trail; demonstrates due diligence |
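To illustrate the idea behind SHAP, here is an exact Shapley-value computation for a tiny additive credit model mirroring the table's example (baseline 660, effects +50/+30/-20). Production systems use the shap library's sampling or tree-specific algorithms; brute-force enumeration like this is only feasible for a handful of features:

```python
from itertools import permutations

# Sketch: exact Shapley values for a tiny additive credit model,
# mirroring the SHAP example in the table above. The model and its
# baseline/effect numbers are hypothetical.

def shapley_values(feature_names, value_fn):
    contrib = {n: 0.0 for n in feature_names}
    perms = list(permutations(feature_names))
    for order in perms:
        present = set()
        prev = value_fn(present)
        for name in order:
            present.add(name)
            now = value_fn(present)
            contrib[name] += now - prev  # marginal contribution
            prev = now
    return {n: c / len(perms) for n, c in contrib.items()}

# Hypothetical additive model: baseline 660 plus per-feature effects
EFFECTS = {"income": 50, "employment": 30, "debt": -20}

def credit_score(present_features):
    return 660 + sum(EFFECTS[f] for f in present_features)

phi = shapley_values(list(EFFECTS), credit_score)
print(phi)                      # {'income': 50.0, 'employment': 30.0, 'debt': -20.0}
print(660 + sum(phi.values()))  # 720.0 -- attributions sum to the final score
```

The additivity property shown on the last line is what makes SHAP-style outputs useful in Data Principal notices: every point of the final score is accounted for by a named feature.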
| DPDPA Obligation | AI-Specific Interpretation | Practical Implementation |
|---|---|---|
| Section 8(2) Purpose Limitation | Cannot train AI for one purpose, then use it for another without new consent | • Separate models for each purpose • Technical controls preventing purpose drift • Audit logs of model usage |
| Section 8(3) Data Accuracy | Training data must be accurate; model outputs must be accurate | • Data quality checks before training • Model validation on held-out test sets • Monitoring for model drift/degradation |
| Section 8(4) Security Safeguards | Protect training data, model parameters, inference APIs | • Encrypted model storage • Access controls on training data • API rate limiting & authentication • Model theft prevention |
| Section 8(5) Retention Limitation | Delete training data when purpose served; decommission old models | • Training data retention policy • Model versioning with deletion schedule • Automated cleanup processes |
| Section 8(6) Breach Notification | Model theft, training data leaks, adversarial attacks are breaches | • Monitor for model extraction attempts • Detect data poisoning attacks • Incident response for ML-specific threats |
1. Model Extraction Attacks
2. Membership Inference Attacks
3. Data Poisoning
4. Adversarial Examples
5. Model Inversion
Governance Elements:
Standardized documentation for every production AI model:
AI MODEL CARD: Credit Risk Scoring Model v2.3
MODEL DETAILS
- Model Type: Gradient Boosted Trees (XGBoost)
- Purpose: Assess creditworthiness for personal loans
- Owner: Credit Risk Team (risk@bank.com)
- Last Updated: 01-Dec-2024
- Next Review: 01-Mar-2025
TRAINING DATA
- Dataset: 500,000 loan applications (2019-2023)
- Features: 25 attributes (income, employment, credit history, debt ratio)
- Excluded Features: Gender, caste, religion, marital status (protected)
- Data Quality: 98.5% complete, 2% missing values imputed
- Geographic Coverage: All India (urban & rural)
PERFORMANCE METRICS
- Overall Accuracy: 87.3%
- Precision: 84.1%
- Recall: 89.6%
- AUC-ROC: 0.91
FAIRNESS EVALUATION
- Gender: Male (86.8% acc), Female (87.9% acc) - PASS
- Age: <30 (85.2%), 30-50 (88.1%), 50+ (86.5%) - PASS
- Geography: Urban (88.0%), Rural (86.1%) - Minor disparity, acceptable
- Caste: Not tested (data not collected per law)
KNOWN LIMITATIONS
- Performance degrades for self-employed applicants (75% accuracy)
- Limited training data for first-time borrowers (< 1000 samples)
- May be less accurate in high-inflation periods (training during stable economy)
DPDPA COMPLIANCE
✓ Purpose limited: Only used for loan decisions
✓ Consent obtained: Applicants consent to automated assessment
✓ Human review: All rejections reviewed by loan officer
✓ Explainability: SHAP values provided for each decision
✓ Opt-out: Applicants can request manual-only assessment
✓ Security: Model encrypted, access logged, API authenticated
ETHICAL CONSIDERATIONS
- Regularly monitored for emerging bias
- Independent audit every 6 months
- Feedback mechanism for disputed decisions
CONTACT
- Model Owner: Priya Sharma (priya@bank.com)
- DPO: dpo@bank.com
- AI Ethics Committee: ethics@bank.com
Generative AI (ChatGPT, DALL-E, Gemini, etc.) presents unique compliance challenges:
| Scenario | DPDPA Question | Analysis | Compliance Action |
|---|---|---|---|
| Company uses ChatGPT for customer support | Is sending customer queries to OpenAI a data transfer? | YES - Customer messages contain personal data; OpenAI is Data Processor | • DPA with OpenAI • Customer consent for AI use • Check OpenAI's India data residency |
| Employee pastes internal document into Claude | Is this a personal data breach? | MAYBE - If document contains customer/employee data, yes; if generic business data, no | • Employee training on GenAI use • Policy: Don't paste personal data • Consider enterprise ChatGPT with no training on inputs |
| GenAI generates fake customer review | Is fabricated personal data covered by DPDPA? | NO - Completely fictional data about a non-existent person isn't "personal data", BUT it could be fraud/misrepresentation under other laws | • Don't generate fake reviews (other legal issues) • Disclose AI-generated content |
| Training proprietary LLM on customer emails | Can we use customer emails to train our AI? | REQUIRES CONSENT - Customer emails contain personal data; training is processing beyond original purpose | • Obtain specific consent for AI training • Or anonymize emails before training • Or use synthetic data instead |
| ChatGPT regurgitates memorized personal data | Who is liable - OpenAI or the user? | COMPLEX - If the user is a Data Fiduciary who sent data to OpenAI (its Data Processor), the user is liable. If ChatGPT learned the data from the public web, OpenAI is the Data Fiduciary for that processing. | • Users: Don't share others' personal data with GenAI • GenAI providers: Implement filters to prevent PII regurgitation |
Option 1: Use Public GenAI with Safeguards
Option 2: Self-Hosted Open Source Models
Option 3: Hybrid - Fine-Tuning Base Models
❌ Privacy-Risky Prompt:
"Analyze this customer complaint email and draft a response:
From: rajesh.kumar@email.com
Date: 10-Dec-2024
Subject: Account Issue
Hi, my account number 1234-5678-9012 has an incorrect address.
It shows Flat 301, Green Tower, Andheri, Mumbai 400053, but I
moved to Bangalore. Please update to #45, MG Road, Bangalore 560001.
My Aadhaar is 1234-5678-9012 for verification.
Thanks, Rajesh Kumar | +91-98765-43210"
⚠️ Problems: Full name, email, phone, Aadhaar, account number, old & new addresses all sent to GenAI!
✅ Privacy-Safe Prompt:
"Draft a response to a customer who reported an incorrect address
on their account. The customer has provided valid ID for verification.
Tone should be: professional, apologetic, helpful.
Template response should:
1. Acknowledge the issue
2. Confirm we'll update the address
3. Explain timeline (24-48 hours)
4. Provide contact for further issues"
✅ No personal data sent; GenAI creates template; human fills in specifics
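Where fully generic prompts aren't practical, a pre-submission redaction filter can catch the most obvious identifiers before a prompt leaves the organization. The regexes below are illustrative and deliberately incomplete; a real deployment needs a dedicated PII-detection service plus policy and training:

```python
import re

# Sketch: redacting obvious PII patterns before a prompt is sent to a
# GenAI service. These regexes are illustrative and NOT exhaustive.

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+91[-\s]?\d{5}[-\s]?\d{5}"), "[PHONE]"),
    (re.compile(r"\b\d{4}-\d{4}-\d{4}\b"), "[ID_NUMBER]"),  # Aadhaar/account-style
]

def redact(text):
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Customer rajesh.kumar@email.com, account 1234-5678-9012, "
          "phone +91-98765-43210, reports an incorrect address.")
print(redact(prompt))
# Customer [EMAIL], account [ID_NUMBER], phone [PHONE], reports an incorrect address.
```

Regex filters miss names, addresses, and free-text identifiers, so this belongs alongside (not instead of) the "don't paste personal data" policy above.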
The Controversy:
DPDPA Implications:
Emerging Solutions:
Use Case: "ShopBot" - AI assistant on shopping website
DPDPA Compliance Checklist:
While DPDPA provides data protection requirements, comprehensive AI governance requires considering global risk frameworks:
These frameworks help organizations manage AI risks beyond just data protection (safety, reliability, ethics, societal impact).
| Risk Level | Definition | Examples | Requirements |
|---|---|---|---|
| UNACCEPTABLE RISK | AI that poses clear threat to safety, livelihoods, or rights | • Social scoring by governments • Subliminal manipulation • Real-time biometric ID in public (with exceptions) • Emotion recognition in workplace/education | PROHIBITED: Cannot be deployed |
| HIGH RISK | AI in critical domains affecting fundamental rights | • AI in medical devices • Critical infrastructure • Education/exam scoring • Employment (hiring, promotion) • Law enforcement • Migration/asylum • Credit scoring | • Conformity assessment • Technical documentation • Human oversight • Data governance • Accuracy, robustness testing • Cybersecurity • Risk management system |
| LIMITED RISK | AI requiring transparency | • Chatbots • Deepfakes • Emotion recognition (outside workplace/education) • Biometric categorization | • Disclosure that AI is being used • Label AI-generated content • Transparency obligations only |
| MINIMAL RISK | All other AI | • Spam filters • Video game AI • Inventory management • Recommendation engines (low stakes) | • No specific obligations • Voluntary codes of conduct encouraged |
Risk Assessment Matrix for DPDPA:
| Factor | Questions to Ask | High Risk Indicators |
|---|---|---|
| 1. Volume of Personal Data | How much personal data does AI process? | • Millions of data subjects • Large-scale processing (SDF threshold) |
| 2. Sensitivity of Data | What type of data is processed? | • Health data • Financial data • Biometric data • Children's data |
| 3. Automated Decision Impact | What does the AI decide and what are consequences? | • Legal rights affected (credit, employment, insurance) • Irreversible decisions • No human override |
| 4. Potential for Discrimination | Could AI treat groups unfairly? | • Uses protected attributes • Historical bias in training data • Lack of fairness testing |
| 5. Transparency & Explainability | Can decisions be explained? | • Black box model • No XAI techniques • Users can't understand why AI decided |
| 6. Correction Difficulty | Can wrong decisions be fixed? | • No appeal mechanism • Machine unlearning impossible • No data correction process |
Risk Scoring:
Phase 1: Design (Before Building AI)
Phase 2: Development (Building AI)
Phase 3: Validation (Before Deployment)
Phase 4: Deployment (Going Live)
Phase 5: Monitoring (Ongoing)
Phase 6: Retraining / Decommissioning
System: "DiagnoseAI" - App that suggests possible diagnoses based on symptoms
Risk Assessment:
| Risk Factor | Score | Analysis |
|---|---|---|
| Data Volume | HIGH | 1 million users, SDF likely |
| Data Sensitivity | HIGH | Health data (symptoms, medical history) |
| Decision Impact | HIGH | Health decisions; wrong diagnosis could harm |
| Discrimination Potential | MEDIUM | Could be biased by demographics in training data |
| Transparency | MEDIUM | Medical AI often black-box; explainability challenging |
| Correction Difficulty | HIGH | User may act on AI suggestion before correction possible |
Result: VERY HIGH RISK (4 of 6 factors rated HIGH)
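The factor-counting used in this assessment can be sketched as a simple scoring function. The mapping from HIGH-factor counts to tiers is an assumption made for illustration; DPDPA prescribes no scoring formula:

```python
# Sketch: turning six factor ratings into an overall risk tier. The
# count-to-tier mapping is an illustrative assumption, not a DPDPA rule.

def overall_risk(factor_scores):
    """factor_scores: dict mapping factor -> 'HIGH' | 'MEDIUM' | 'LOW'."""
    highs = sum(1 for v in factor_scores.values() if v == "HIGH")
    if highs >= 4:
        return "VERY HIGH"
    if highs >= 2:
        return "HIGH"
    if highs == 1 or "MEDIUM" in factor_scores.values():
        return "MEDIUM"
    return "LOW"

diagnose_ai = {
    "data_volume": "HIGH",
    "data_sensitivity": "HIGH",
    "decision_impact": "HIGH",
    "discrimination_potential": "MEDIUM",
    "transparency": "MEDIUM",
    "correction_difficulty": "HIGH",
}
print(overall_risk(diagnose_ai))  # VERY HIGH -- matches the DiagnoseAI assessment
```

Whatever formula an organization adopts, documenting it (and its rationale) is what turns an ad-hoc judgment into an auditable risk assessment.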
Required Safeguards:
Test your understanding of AI & DPDPA compliance
A company trains a recommendation AI on customer purchase history. Two years later, they want to use the same model for credit risk assessment. Under DPDPA, can they do this?
Correct Answer: B) No - violates purpose limitation (Section 8(2))
Explanation: Section 8(2) requires processing for specified purposes only. "Product recommendations" and "credit risk assessment" are fundamentally different purposes with different impacts on Data Principals. Using data collected for recommendations to make credit decisions requires new consent or clear legitimate basis. This is a classic example of purpose creep that DPDPA prohibits. The company must obtain fresh consent for credit scoring or collect new data specifically for that purpose.
An ML engineer argues: "Removing 30% of features reduces model accuracy from 92% to 89%. We should keep all features." From a DPDPA perspective, what's the correct response?
Correct Answer: C) Conduct necessity test - is marginal accuracy gain worth privacy cost?
Explanation: DPDPA's data minimization principle (Section 4) requires collecting only "necessary" data. Necessity isn't determined solely by technical performance - it requires balancing utility against privacy intrusion. A 3% accuracy gain from 30% more features may not meet the necessity threshold, especially if the removed features are sensitive. The correct approach is: (1) Assess what the 30% of features are (sensitive? proxies for protected attributes?), (2) Evaluate the real-world impact of 3% accuracy difference, (3) Consider alternatives like synthetic data or privacy-preserving techniques. "More data = better model" is not DPDPA-compliant reasoning.
A banking app uses AI for (a) fraud detection and (b) personalized investment recommendations. Which requires consent?
Correct Answer: D) Only investment recommendations require consent
Explanation: Section 7 allows processing without consent for "necessary" purposes including compliance with law and performance of contract. Fraud detection is necessary for security and arguably a legal obligation, falling under legitimate use. However, personalized investment recommendations are value-added services not essential for basic banking. They involve profiling and automated decision-making that affect users financially, requiring explicit consent under Section 6. The bank must offer basic banking without requiring consent to investment AI, as bundling would violate Section 6(4).
Your loan approval AI has 87% accuracy overall. Testing shows: Men - 88% accuracy, Women - 83% accuracy. Is this DPDPA-compliant?
Correct Answer: D) Unclear - need to investigate if difference is statistically significant and systematic
Explanation: A 5% accuracy gap between genders is a red flag but not automatically non-compliant. You must investigate: (1) Is the difference statistically significant given sample sizes? (2) What's causing it - biased training data? Proxy variables? Different underlying distributions? (3) Does this violate constitutional equality (Article 14)? (4) Is the lower accuracy causing systematic harm to women? Even without gender as an explicit feature, proxy variables (e.g., employment type, industry) can encode gender. Section 8(3) requires accuracy, and systematically lower accuracy for a protected group likely violates this. The organization should implement fairness constraints, retrain with balanced data, or use separate models if needed. Simply having "good overall accuracy" doesn't excuse discriminatory outcomes.
Employees in your company are using ChatGPT (free version) to draft customer service emails by pasting customer complaints. Is this compliant?
Correct Answer: B) No - sends customer personal data to OpenAI without authorization
Explanation: This is a data breach scenario. Customer complaints likely contain personal data (names, emails, account details, possibly sensitive information about issues). Pasting this into ChatGPT free version means: (1) Data is sent to OpenAI's servers (third-party transfer), (2) OpenAI may use it to train future models (on free tier), (3) No Data Processing Agreement exists between your company and OpenAI, (4) Customers never consented to their data being sent to OpenAI. Your company is the Data Fiduciary; OpenAI would be a Data Processor, but without proper contracts this is unauthorized disclosure. Correct approach: (1) Use enterprise ChatGPT (Azure OpenAI) with no-training guarantee and DPA, (2) Anonymize complaints before using GenAI, or (3) Train employees not to paste customer data into public AI tools.
Rank these AI systems from HIGHEST to LOWEST DPDPA risk: (A) Spam filter, (B) Automated loan approval, (C) Product recommendation engine, (D) Medical diagnosis AI
Correct Answer: B) D > B > C > A
Explanation: Risk assessment considers: data sensitivity, decision impact, potential harm, volume, and reversibility.
(D) Medical Diagnosis AI - HIGHEST RISK: Health data (highly sensitive), life-or-death decisions, wrong diagnosis causes severe harm, falls under medical device regulations. Requires maximum safeguards.
(B) Automated Loan Approval - HIGH RISK: Affects legal rights (credit access), financial data, discriminatory potential, impacts livelihood. Requires DPIA if SDF.
(C) Product Recommendations - MEDIUM RISK: Personal data (browsing/purchase history), but reversible, low-stakes (user can ignore), primarily commercial impact.
(A) Spam Filter - LOWEST RISK: Necessary for service, minimal personal data processing, easily reversible (check spam folder), low stakes.
Risk level determines compliance intensity: Medical AI needs clinical validation, maximum transparency, human oversight; Spam filter needs basic DPDPA compliance only.
Congratulations! You've completed all 6 modules covering India's Digital Personal Data Protection Act, 2023, and are ready for the final examination. The future of AI in India depends on professionals like you who understand both technology and privacy law. Now it's time to test your knowledge and earn your certificate!
🎯 Take Final Examination: 90 questions | 120 minutes | 85% passing score