
Nigeria’s Consumer Protection Laws Aren’t Ready for AI


Artificial Intelligence is no longer a distant frontier; it’s already embedded in Nigeria’s fast-growing digital economy. In 2024, the fintech sector processed over 108 billion mobile money transactions worth $1.68 trillion, according to FITC Nigeria, much of it powered by AI systems that determine creditworthiness, automate hiring decisions, and assist with health diagnostics. These tools are becoming essential, but they also operate with limited transparency and virtually no consumer oversight.

Despite AI’s growing influence, Nigeria’s legal frameworks haven’t caught up. Laws like the FCCPA and the NDPA provide general protections but don’t address AI-specific risks such as bias, explainability, or the right to challenge automated decisions. While regulators have begun enforcing compliance, issuing major fines to fintechs in 2024, current policies lack the structure needed to protect Nigerians from opaque algorithmic systems. Nigeria urgently needs to reform its consumer protection laws to ensure that innovation doesn’t outpace accountability. Stronger, AI-aware policies are now critical to safeguarding digital rights in a fast-evolving economy.


How AI Is Already Affecting Nigerian Consumers

AI is already shaping how Nigerians access jobs, loans, and even healthcare, not through flashy announcements, but through quiet automation built into everyday digital services. From CV filtering to credit scoring and chatbot diagnoses, life-altering decisions are increasingly made by algorithms, often without the user’s awareness.

In the job market, AI tools are now standard in large organizations and recruitment platforms. They automatically screen thousands of CVs, ranking candidates based on predicted performance. A recent survey found that 80% of Nigerian firms are exploring automated hiring. Most applicants don’t know their application may never reach a human, and have no way to challenge rejections.

In fintech, companies like FairMoney use AI to assess creditworthiness based on mobile phone data, including call logs, SMS patterns, and app usage. This has helped expand access to credit for users without formal banking records. But it also introduces bias: someone could be denied a loan simply because of how they use their phone, not because of their actual financial behavior. In 2024, the CBN fined several fintechs, including Opay and Moniepoint, over ₦1 billion for weak KYC and compliance controls, lapses often linked to automated systems.

In healthcare, chatbots like You and Your Health and AwaDoc offer 24/7 diagnostic support via WhatsApp, reducing the need for in-person consultations. But without regulation or clinical oversight, users are at risk of misdiagnosis or over-reliance on flawed advice, especially if the AI isn’t trained on Nigerian-specific medical data.

Across all three sectors, the core issue remains the same: consumers are engaging with AI systems that influence critical outcomes, but they lack visibility, understanding, and recourse.


The Legal Gap

The systems making these decisions are advancing, but the rules that govern them are still stuck in the past. As AI tools quietly take over tasks once handled by humans, Nigeria’s legal frameworks offer little clarity on how they should operate, or on how consumers should be protected when things go wrong.

The Federal Competition and Consumer Protection Act (FCCPA) provides important safeguards against unfair business practices, but it wasn’t built for automated systems. It makes no mention of algorithmic decision-making, and there’s no requirement for companies to tell consumers when AI is used. A job applicant filtered out by a resume-ranking algorithm, or a borrower rejected by a scoring model, has no legal right to an explanation or a path to contest the outcome.

The Nigerian Data Protection Act (NDPA) attempts to regulate automated processing, but its protections are full of loopholes. While Section 27(g) restricts fully automated decisions, it allows exceptions for contractual or legal obligations, conditions broad enough to apply to most commercial use cases. In effect, companies can still deploy AI to make high-stakes decisions without human involvement or accountability.

This leaves Nigerian consumers exposed. There is no transparency requirement; companies are not obligated to disclose when AI is used in decision-making. Consumers also lack the right to appeal or request human review, leaving them without recourse if they are harmed by automated outcomes. There are no mandatory audit requirements, meaning these systems can operate with bias or inaccuracies without oversight. And while sectors like lending and healthcare are clearly high-risk, Nigerian law has yet to formally classify or regulate them as such. A draft directive, the Guidance for AI Deployment (GAID), proposes algorithmic accountability, but it’s still optional and lacks enforcement.

Until the law catches up, consumers will continue to engage with powerful systems that operate in the dark, with no protections if the outcome is wrong.

 

Global Precedents Are Already Setting the Standard

As artificial intelligence becomes more deeply integrated into decision-making systems, countries around the world are beginning to establish frameworks that reflect the risks it introduces, particularly in consumer-facing sectors. While these efforts differ in scope and enforcement, they share a core understanding: AI cannot operate in high-stakes environments without clear legal accountability.

The European Union’s AI Act, which takes effect in 2025, is currently the most comprehensive approach globally. Rather than applying uniform rules, the Act introduces a risk-based classification system, identifying use cases like credit scoring, employment screening, and healthcare diagnostics as “high-risk.” For these systems, developers must implement mechanisms such as explainability, audit trails, bias mitigation, and human oversight. The regulation also places outright bans on certain uses, such as emotion recognition in the workplace and real-time biometric surveillance in public spaces, on ethical and human rights grounds. What stands out for Nigeria here is the principle of proportionality: not every AI application demands heavy oversight, but those that directly impact individual rights must be held to a higher standard.

South Africa’s Protection of Personal Information Act (POPIA) reinforces this through a rights-based framework. Under Section 71, individuals are entitled to demand human intervention when a decision made solely by an automated system significantly affects their access to services, employment, or healthcare. POPIA also requires that organizations explain the logic behind such decisions and establish accessible redress mechanisms. This approach highlights the value of procedural fairness: regulation should not merely focus on how AI functions, but on ensuring that those impacted by it have a voice and a pathway to accountability.

Kenya, though still in the early stages of formal AI regulation, has adopted a proactive stance. Its Office of the Data Protection Commissioner has begun reviewing the use of algorithms in public-sector programs, conducting pre-implementation impact assessments and engaging with stakeholders, including civil society and industry actors. Rather than wait for harm to materialize, Kenya is building a framework that addresses risk before it scales. This reflects a key regulatory lesson for emerging economies: governance must be anticipatory, not reactive.

These models, though shaped by different legal traditions, offer valuable lessons for Nigeria. The EU demonstrates the importance of risk classification and enforceable safeguards. South Africa underscores the need for transparency and recourse mechanisms. Kenya shows that governance can begin before full legislation is in place. As Nigeria navigates its regulatory path, these examples suggest that effective AI oversight is not just possible, it is necessary, and increasingly urgent.


Turning Nigeria’s AI Vision Into Law

The draft National Artificial Intelligence Strategy (NAIS) signals an important shift in Nigeria’s policy approach to AI, grounding its ambitions in principles like transparency, fairness, and accountability. It outlines high-risk sectors, proposes a dedicated regulatory authority, and gestures toward a more structured AI future. For the first time, there’s a national framework that acknowledges AI as not just a tool for innovation, but a system with real consequences for rights, equity, and trust.

Yet, as promising as it is, the NAIS remains a vision on paper. Without legislative weight, its ethical pillars lack enforceability, its regulatory ideas remain aspirational, and its consumer protections are still out of reach. In the face of AI systems already shaping access to jobs, credit, and healthcare, strategy alone is not enough. The urgency now lies in translating intention into law. This requires targeted legal reform: updates to the FCCPA and NDPA to reflect how AI influences decision-making across key sectors, and the swift passage of the AI Bill to establish binding obligations around high-risk systems.

But legislation cannot function in isolation. It must be matched by institutional capacity: regulators need both technical expertise and political support to carry out algorithmic audits, enforce transparency standards, and intervene when harm occurs. At the same time, governance must center the individual. Nigerians need to be informed when AI is being used in decisions that affect them, and they must have channels to question or contest those outcomes. Without clear communication, explanation rights, and opt-out mechanisms, accountability remains theoretical.

The NAIS offers a direction, but Nigeria’s ability to lead in AI governance will depend on whether that direction becomes action. The opportunity to act while systems are still taking shape is rare. What comes next will determine whether AI deepens inequality or strengthens public trust in the digital age.

 

Conclusion

AI is not waiting for legislation to catch up. Every day, it is reshaping how Nigerians work, borrow, and care for their health, often invisibly, and without recourse. Nigeria now faces a rare window to act before these systems become too entrenched to regulate. Strengthening consumer protections is not just about catching up to global standards; it’s about building a governance model rooted in local realities. With smart reforms, Nigeria can move from reactive oversight to proactive leadership, ensuring AI serves its people, not just its markets.
