2 Comments
Michael Woudenberg

I'm torn on this one. We need to start with the existential risk and address that first, because if we try to start with everything listed, literally nothing will happen.

If success requires equal benefits, the elimination of bias, and robust regulation all at once, we likely won't have anything left, because that standard doesn't reflect reality.

Take this one for instance: Amplifying Unfairness and Discrimination.

This might be true, and I hear it a lot, but where? What company would accept an outcome that discriminates without a second thought? Take home loans and redlining. We say that risk assessment is bad, yet insurance companies do the same thing all the time, especially in places like Florida, where some insurers refuse to cover houses in formerly redlined areas for the same risk reasons cited for home loans.

But if what we call bias is really just an outcome we don't want, that's not bias per se but accuracy.

But a regulatory body for algorithmic bias? What about the fact that an algorithm is, by construction, mathematical bias? It takes a large volume of data, finds patterns, and reduces them toward an outcome.
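A minimal sketch of that point, purely as an illustration (the data and numbers below are made up): fitting even a trivial model is nothing more than compressing a pile of observations into a handful of coefficients, and whatever skew the data carries is carried straight into them.

```python
# Illustrative sketch: "learning" is mathematically a form of bias.
# The data here are hypothetical; this just fits a least-squares line,
# i.e. it reduces 500 (x, y) observations to two numbers that encode
# whatever pattern the data contains, including any skew in how it was collected.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=500)          # hypothetical input feature
y = 2.0 * x + rng.normal(0, 1, size=500)  # hypothetical outcome with noise

# "Training" = finding coefficients that best compress the data toward an outcome.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned pattern: y ~ {slope:.2f} * x + {intercept:.2f}")
# If the training data over- or under-represents some group, that skew is baked
# into these coefficients; the model is faithfully "biased" toward its data.
```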

The bigger issue is that we throw around the term bias without understanding how many layers of different biases exist in these systems. (See the article linked below on eliminating bias in AI/ML.)

On the one hand we pooh-pooh looking at existential threats and so do nothing, while on the other hand we focus on a million problems that are utopian and so do nothing.

https://www.polymathicbeing.com/p/eliminating-bias-in-aiml

A Z Mackay

Regardless of whether we're concerned about algorithmic bias today or hypothetical future scenarios with advanced AI, many of the technical safeguards would likely be similar. Robust capabilities like logging, monitoring, testing, and override protocols can help ensure we maintain human control and oversight over AI systems, preventing unintended harms.
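To make that concrete, here is a rough, hypothetical sketch of what such a safeguard might look like; the class, threshold, and predict interface are assumptions for illustration, not any particular library's API. Predictions are logged, low-confidence cases are routed to a human, and an operator flag halts the model outright.

```python
# Hypothetical sketch of a logged, human-overridable prediction wrapper.
# Class and method names are illustrative assumptions, not a real library's API.
import logging
from dataclasses import dataclass
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_oversight")

@dataclass
class OverseenModel:
    predict_fn: Callable[[Any], tuple[Any, float]]  # returns (decision, confidence)
    confidence_floor: float = 0.9                   # below this, defer to a human
    halted: bool = False                            # manual kill switch

    def predict(self, features: Any) -> Any:
        if self.halted:
            log.warning("Model halted by operator; deferring to human review.")
            return "HUMAN_REVIEW"
        decision, confidence = self.predict_fn(features)
        log.info("input=%s decision=%s confidence=%.2f", features, decision, confidence)
        if confidence < self.confidence_floor:
            log.info("Confidence below floor; routing to human review.")
            return "HUMAN_REVIEW"
        return decision

# Usage with a dummy model:
model = OverseenModel(predict_fn=lambda x: ("approve", 0.95))
print(model.predict({"income": 50000}))  # approve
model.halted = True                      # operator override
print(model.predict({"income": 50000}))  # HUMAN_REVIEW
```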

Standards like DO-178C for aerospace software provide a model for developing safety-critical systems that maintain very low defect rates. Adopting similar rigor for verifying, validating, and auditing AI systems would go a long way to managing risks, whether near-term or longer-term.
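As a loose illustration of that kind of rigor (the requirement IDs, threshold, and helper functions below are hypothetical assumptions, not anything DO-178C itself prescribes), one simple building block is requirement-traceable acceptance testing, where each documented requirement maps to an automated check that produces auditable evidence.

```python
# Hypothetical sketch of requirement-traceable acceptance tests for a model,
# loosely in the spirit of DO-178C traceability (requirement -> test -> evidence).
# The requirement IDs, thresholds, and evaluate() helper are illustrative assumptions.

def evaluate(model, dataset):
    """Hypothetical helper returning metrics for the model on a held-out dataset."""
    correct = sum(model(x) == y for x, y in dataset)
    return {"accuracy": correct / len(dataset)}

REQUIREMENTS = {
    "REQ-001": ("held-out accuracy is at least 0.95", lambda m: m["accuracy"] >= 0.95),
}

def run_acceptance_tests(model, dataset):
    metrics = evaluate(model, dataset)
    results = {}
    for req_id, (description, check) in REQUIREMENTS.items():
        results[req_id] = check(metrics)
        print(f"{req_id}: {description} -> {'PASS' if results[req_id] else 'FAIL'}")
    return all(results.values())

# Usage with a dummy model and dataset:
dataset = [(i, i % 2) for i in range(100)]
assert run_acceptance_tests(lambda x: x % 2, dataset)
```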

Of course, technical solutions aren't the only piece of the puzzle. Inclusive development processes, diversity in the field, and thoughtful policy conversations also play a role. But pragmatic engineering practices building on proven safety methodologies can help safeguard both current and future AI in an incremental way, without blocking continued progress.

I appreciate you contributing this insightful perspective on the multifaceted approach needed.