Generative AI adoption is moving faster than most organizations' ability to govern it, and that gap is turning into a commercial risk. McKinsey reports that 88% of organizations now use AI in at least one business function, but many are still early in scaling it responsibly.
Trust is already a tightening constraint. New data from Edelman shows global trust in AI companies has fallen from 61% to 53% over five years, a signal that reputational shocks can compound quickly after incidents. Confidence is also uneven by market. For example, 72% of respondents in China say they trust AI, versus 32% in the United States, creating very different adoption, scrutiny and regulatory pressures across regions.
Regulation is catching up. The European Parliament notes that parts of the EU AI Act have already started applying, with further obligations rolling in on defined timelines, including transparency requirements and later-stage compliance for high-risk systems. For leaders, this is no longer just an ethics debate. It is governance, resilience, and balance-sheet exposure.
In this exclusive interview with the Champions Speakers Agency, Sarah Armstrong-Smith, a senior cybersecurity and resilience leader, sets out what meaningful oversight of generative AI requires and how trust can be rebuilt at scale.
Meaningful Oversight of Generative AI Requires Enforceable Standards and Continuous Accountability
Meaningful oversight requires moving beyond voluntary principles and codes of conduct toward enforceable standards, independent audits and transparent reporting. Regulators need visibility into training data sources, safety testing, incident response processes and model governance structures. Without this, oversight becomes symbolic rather than substantive.
There should also be mandatory red teaming, risk assessments and post-deployment monitoring, especially for models embedded in social platforms or used at large scale. These controls must be continuous, not one-off exercises.
Arguably, given the volume of data and daily transactions they handle, social media platforms can lead on safety standards rather than flout them.
Technology Leaders Must Rebuild Trust as Generative AI Scales Across Platforms and Markets
The first lesson is integrity. AI systems, no matter how advanced, are not fully understood and can behave unpredictably, and the public expects companies to acknowledge this. Upholding accountability and transparency without limitation is essential for rebuilding trust.
The second lesson is that safety must be designed in, not bolted on. Reactive fixes when the pressure starts to build are not enough; responsible and reliable AI requires anticipating misuse, adversarial behaviour and societal impact before deployment. Grok's experience reinforces this point at a much larger scale.
Finally, leaders must recognise that trust is cumulative. Every incident, and how companies choose to respond to it, shapes public perception of the entire industry. Companies that prioritise responsible innovation and doing the right thing from the outset will be the ones that maintain credibility.
What AI Governance Should Look Like Inside Organizations
Treat deployment as a safety and security imperative, not a product decision. Most incidents and failures happen after release, not during development. Companies should conduct adversarial red teaming, stress test models in realistic environments, apply strict content filters and monitoring, and establish kill switches and rollback plans.
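As a rough illustration of the kill-switch and rollback idea, the Python sketch below gates every model call behind an operator-controlled flag. The flag file, version names and stub models are hypothetical placeholders, not a reference implementation.

```python
# Minimal sketch of a kill switch and rollback gate in front of a deployed model.
# The flag store, model registry and version names below are hypothetical.
import json
from pathlib import Path

FLAG_FILE = Path("ops/model_flags.json")   # e.g. {"kill_switch": false, "serving_version": "v2"}
FALLBACK_VERSION = "v1"                    # last known-good model version

def load_flags() -> dict:
    """Read operator-controlled flags; default to the safest state if unreadable."""
    try:
        return json.loads(FLAG_FILE.read_text())
    except (OSError, json.JSONDecodeError):
        return {"kill_switch": True, "serving_version": FALLBACK_VERSION}

def route_request(prompt: str, models: dict) -> str:
    """Check the kill switch before every call and fall back to the rollback model on error."""
    flags = load_flags()
    if flags.get("kill_switch"):
        return "Service temporarily unavailable."        # hard stop, no model call
    version = flags.get("serving_version", FALLBACK_VERSION)
    try:
        return models[version](prompt)                   # call the flagged model version
    except Exception:
        return models[FALLBACK_VERSION](prompt)          # rollback path on failure

# Usage with stub functions standing in for real model deployments:
stub_models = {"v1": lambda p: f"[v1] {p}", "v2": lambda p: f"[v2] {p}"}
print(route_request("Summarise this policy.", stub_models))
```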
Minimise data exposure by design. Use data minimisation, set clear boundaries on what you store or use for training, implement tiered access controls, and adopt privacy-preserving architectures.
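A minimal sketch of what tiered access controls and data minimisation can look like in code is shown below; the roles, tiers and field names are invented for illustration.

```python
# Minimal sketch of tiered access controls plus data minimisation before any
# downstream use (analytics, training, etc.). Roles, tiers and fields are hypothetical.
ROLE_TIERS = {"support_agent": 1, "data_scientist": 2, "privacy_officer": 3}
FIELDS_BY_TIER = {
    1: {"ticket_text"},
    2: {"ticket_text", "product"},
    3: {"ticket_text", "product", "email"},
}

def minimise_record(record: dict, role: str) -> dict:
    """Return only the fields the caller's tier permits; everything else is dropped."""
    allowed = FIELDS_BY_TIER.get(ROLE_TIERS.get(role, 0), set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"ticket_text": "App crashes on login", "product": "mobile", "email": "user@example.com"}
print(minimise_record(record, "data_scientist"))   # email is never exposed at this tier
```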
Responsible and reliable AI isn't a one-off governance exercise; it requires continuous oversight as models grow in functionality and capability. That means regular audits, monitoring for drift, incident reporting mechanisms, and clear accountability at board level to proactively and publicly address failures.
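One concrete way to monitor for drift is to compare the distribution of recent model outputs against a reference window captured at release. The sketch below uses a population stability index; the thresholds and data sources are illustrative assumptions, not a standard.

```python
# Minimal sketch of drift monitoring: compare a reference window of model output
# scores against the current window with a population stability index (PSI).
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions (higher = more drift)."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    cur_pct = np.histogram(current, bins=edges)[0] / len(current) + 1e-6
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.normal(0.6, 0.10, 5000)   # scores logged at release time
current_scores = rng.normal(0.5, 0.15, 5000)     # scores from the latest monitoring window

drift = psi(reference_scores, current_scores)
if drift > 0.2:                                   # common rule-of-thumb alert threshold
    print(f"Drift alert (PSI={drift:.2f}): trigger review and incident reporting.")
```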
What Individuals Need to Understand About Image Misuse and Privacy in AI Systems
The simplest point of reference is to assume that anything uploaded can be copied, altered or mined for inferences. Even if a platform claims not to train on your data, images can still be screenshotted, scraped, used for impersonation or used to infer location, habits or relationships.
In today's digital environment it can sound counterintuitive to tell individuals to limit public posting, remove metadata, avoid identifiable backgrounds and use platform privacy settings aggressively. Small changes can dramatically reduce exposure, but they put the onus on individuals and limit their ability to enjoy and use social and AI platforms.
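For example, metadata removal can be as simple as re-saving a photo's pixels without its EXIF tags before posting. The sketch below uses the Pillow library, with placeholder file names.

```python
# Minimal sketch of stripping metadata (EXIF, including GPS tags) from a photo
# before posting it publicly. Uses Pillow; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save only the pixel data, so EXIF tags such as GPS location are not carried over."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))   # copy pixels, not metadata
        clean.save(dst_path)

strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```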
And importantly, know your rights when using different platforms. For example, under many data protection laws, you can request deletion, challenge automated processing and object to your data being used for training.
This is why it's so important for service providers to help bridge the gap by implementing and enforcing safety and security protocols. This can also include protective technologies such as watermarking, adversarial filters, reverse image monitoring and identity protection services.
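As one illustration of reverse image monitoring, a perceptual hash of an original photo can be compared against images found elsewhere. The sketch below assumes the imagehash library and an arbitrary distance threshold; it is an approximation, not a production service.

```python
# Minimal sketch of reverse image monitoring via perceptual hashing:
# flag images whose hash is close to that of the original photo.
from PIL import Image
import imagehash

def likely_reuse(original_path: str, candidate_path: str, max_distance: int = 8) -> bool:
    """Flag a candidate image whose perceptual hash is close to the original's."""
    original = imagehash.phash(Image.open(original_path))
    candidate = imagehash.phash(Image.open(candidate_path))
    return (original - candidate) <= max_distance   # Hamming distance between hashes

if likely_reuse("my_profile_photo.jpg", "found_online.jpg"):
    print("Possible reuse or alteration detected; review and consider a takedown request.")
```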
This exclusive feature with Sarah Armstrong-Smith was prepared by Tabish Ali of the AI Speakers Agency.
© 2026 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

