07-17-2024

The Role of Human Oversight in AI: Where Is Your Human?

Shak Schiff

Image by Alexandra_Koch from Pixabay

Human oversight in AI is essential as the digital world outpaces the social structures of contemporary society. Today, one of the biggest questions of the digital age is: ‘Where is your human?’ It is a question born of artificial intelligence and automation, and it goes to the heart of some of the greatest challenges of our time.

The Internet’s Overflowing Garbage

The internet is a gift, but more often than not it is also a mess – an information dump riddled with factual errors, distorted data and useless fluff, not to mention trolls who deliberately spread chaos. Computer science has an old saying for this: ‘garbage in, garbage out’. Poor data flowing into AI systems produces misdirected, inaccurate and potentially harmful outputs.

Understanding Garbage In, Garbage Out

Garbage in, garbage out: it’s a simple idea, yet one of great importance. The quality of your output depends on the quality of your input. Feed flawed data to a computing machine and it will produce flawed results. Train or prompt a language model with bad data, and a bad response is the inevitable outcome.

Real-World Examples

Consider how Google’s automated tooling posted ‘confidential legacy’ Content Warehouse API documentation to GitHub earlier this year. The automation was supposed to find and filter sensitive data, but it failed to mark the documentation as sensitive, and a major leak resulted. Attempts were made to remove it from GitHub, but by then it was already public – and it remains in the public domain today. This can happen, and when it does, the consequences can be severe. Hope is not a strategy.

Automation and Its Limits

Automation is terrific at handling predictable, well-defined tasks, but it struggles when we introduce multiple variables, complexity or surprises. Look at the large language models currently available to the public, which can provide inaccurate, misleading or outright dangerous information.

The Danger of Unverified Information

Systems such as GPT-4 are immensely capable – yet they cannot assess the accuracy of what they generate. It is easy to see how this fuels the spread of misinformation: an AI tool might, for instance, offer a diagnosis based on input data that is itself seriously flawed. A human would think twice before writing that a symptom is probably linked to a skin condition; an AI generating that text will not stop to consider whether the conclusion is plausible. Left unchecked, massively productivity-enhancing tools like GPT-4 threaten to undermine society and the economy, and even put lives at risk.



The Importance of Context

Context is what causes most AI systems trouble. They are terrific at narrow, rules-based tasks, but they perform poorly on anything that requires understanding context: the nuances of human language, subtle shifts in behavior. The result is responses that are inappropriate, or even harmful. Ask anyone working in tech right now, and they will tell you how much of that context still has to be sorted out by human oversight.

The Vital Role of Human Oversight

Human oversight is essential because it brings the ability to reflect as well as to act. Machines cannot do ethics and they cannot do nuance on their own; with human oversight, they can get closer to both. More than that, human oversight means people are engaged in judging and considering – not just seeing, but seeing well, and acting well.

Ethical Considerations

Ethics is not built into these systems; their choices are not moral decisions. AI can only do what it has been trained to do, based on patterns in data sets that humans provide; it has no inherent moral understanding. If AI is to act ethically, humans must supervise it to make sure it does not cross the line into danger, unfairness or harm.
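
As a concrete illustration – a minimal sketch, not a prescription – here is what a human-in-the-loop gate could look like in Python. Outputs that fall below a confidence threshold, or that touch sensitive topics, are routed to a reviewer instead of being released automatically. The ModelOutput fields, SENSITIVE_TOPICS list and review queue are hypothetical names chosen for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: route risky model outputs to a human reviewer
# instead of releasing them automatically.

SENSITIVE_TOPICS = {"medical", "legal", "financial"}   # illustrative categories
CONFIDENCE_THRESHOLD = 0.85                            # illustrative cut-off

@dataclass
class ModelOutput:
    text: str
    topic: str
    confidence: float

def release_or_escalate(output: ModelOutput, review_queue: list) -> Optional[str]:
    """Return the text if it is safe to auto-release; otherwise queue it for a human."""
    needs_human = (
        output.confidence < CONFIDENCE_THRESHOLD
        or output.topic in SENSITIVE_TOPICS
    )
    if needs_human:
        review_queue.append(output)   # a person sees it before anyone else does
        return None
    return output.text

# Usage: medical advice always waits for a human, regardless of confidence.
queue = []
result = release_or_escalate(
    ModelOutput("This rash is probably eczema.", topic="medical", confidence=0.91),
    queue,
)
print(result, len(queue))   # None 1
```

The design choice here is deliberate: the default is escalation, so anything the system is unsure about lands in front of a person rather than in front of a user.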

Garbage In, Garbage Out Revisited

As the maxim goes: GIGO, garbage in, garbage out. But even the best possible data cannot replace human agency. Machines are powerful actors, but they still need people to ensure they are used in the right way.

Ensuring Data Quality

It is the role of humans to catch the bad inputs that inevitably crop up and to make sure the AI system ingests only high-quality data. That means data must be continuously validated, cleansed and enriched, so that the reliability and accuracy of the system keep improving.
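
A rough sketch of what that validation step might look like, assuming incoming records are simple dictionaries with hypothetical source and text fields; a real pipeline would add schema checks, deduplication and provenance tracking on top of this.

```python
# Hypothetical sketch: basic validation and cleansing before records
# reach a model's training or retrieval pipeline.

TRUSTED_SOURCES = {"internal_kb", "verified_partner"}   # illustrative allow-list

def is_valid(record: dict) -> bool:
    """Reject records that are empty, from unknown sources, or missing fields."""
    return (
        record.get("source") in TRUSTED_SOURCES
        and isinstance(record.get("text"), str)
        and len(record["text"].strip()) > 0
    )

def cleanse(record: dict) -> dict:
    """Normalise whitespace; real pipelines do far more (PII scrubbing, dedupe, ...)."""
    return {**record, "text": " ".join(record["text"].split())}

raw_records = [
    {"source": "internal_kb", "text": "  Reset the router,\n then retry.  "},
    {"source": "random_forum", "text": "trust me, just delete system32"},
    {"source": "verified_partner", "text": ""},
]

clean_records = [cleanse(r) for r in raw_records if is_valid(r)]
print(clean_records)   # only the first record survives
```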

Continuous Monitoring

AI systems are not ‘set-and-forget’ tools that you simply switch on and leave running. They have to be checked and revised to prevent drift. Evaluation is key to this: at regular intervals, people inspect the system’s performance, identify problems and adjust accordingly.
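
To make that cadence concrete, here is a minimal sketch of a periodic check, assuming a held-out evaluation set and a hypothetical model with a predict method. Current accuracy is compared against a baseline, and the system is flagged for human review when it drifts too far.

```python
# Hypothetical sketch: periodic evaluation that flags drift for human review.

BASELINE_ACCURACY = 0.90   # illustrative figure recorded at launch
DRIFT_TOLERANCE = 0.05     # how far accuracy may fall before humans step in

def evaluate(model, eval_set: list) -> float:
    """Accuracy of the model on a held-out set of (input, expected) pairs."""
    correct = sum(1 for x, expected in eval_set if model.predict(x) == expected)
    return correct / len(eval_set)

def check_for_drift(model, eval_set: list) -> bool:
    """Return True when performance has drifted enough to need human attention."""
    accuracy = evaluate(model, eval_set)
    drifted = accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE
    if drifted:
        print(f"Accuracy {accuracy:.0%} is below baseline – escalate to a reviewer.")
    return drifted

class DummyModel:
    """Stand-in for a real model; always answers 'yes'."""
    def predict(self, x: str) -> str:
        return "yes"

eval_set = [("is water wet?", "yes"), ("is fire cold?", "no")]
check_for_drift(DummyModel(), eval_set)   # 50% accuracy – prints an escalation notice
```

Run on a schedule (nightly, weekly, or after every data refresh), a check like this turns monitoring into a routine human task rather than an afterthought.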

Conclusion

At a time when tech developments can turn on a dime, the question ‘Where is your human?’ could hardly be more relevant. For all the potential of AI and automation, they are not perfect systems. Human oversight is critical to making them work safely, ethically and effectively.

Want to know more about the role of human oversight in AI, and how you can incorporate it into your software development process? Get in touch today to discuss the details with our experts.
