February 3, 2026 10:12 AM 4 min read

Why Generative AI Governance Is Becoming A Balance-Sheet Issue

by Tabish Ali, Benzinga Contributor

Generative AI adoption is moving faster than most organizations' ability to govern it, and that gap is turning into a commercial risk. McKinsey reports that 88% of organizations now use AI in at least one business function, but many are still early in scaling it responsibly.

Trust is already becoming a binding constraint. New data from Edelman shows global trust in AI companies has fallen from 61% to 53% over five years, a signal that reputational shocks can compound quickly after incidents. Confidence is also uneven by market. For example, 72% of respondents in China say they trust AI, versus 32% in the United States, creating very different adoption, scrutiny and regulatory pressures across regions.

Regulation is catching up. The European Parliament notes that parts of the EU AI Act have already started applying, with further obligations rolling in on defined timelines, including transparency requirements and later-stage compliance for high-risk systems. For leaders, this is no longer just an ethics debate. It is governance, resilience, and balance-sheet exposure.

In this exclusive interview with the Champions Speakers Agency, Sarah Armstrong-Smith, a senior cybersecurity and resilience leader, sets out what meaningful oversight of generative AI requires and how trust can be rebuilt at scale.

Meaningful Oversight of Generative AI Requires Enforceable Standards and Continuous Accountability

Meaningful oversight requires moving beyond voluntary principles and codes of conduct toward enforceable standards, independent audits and transparent reporting. Regulators need visibility into training data sources, safety testing, incident response processes and model governance structures. Without this, oversight becomes symbolic rather than substantive.

There should also be mandatory red teaming, risk assessments and post-deployment monitoring, especially for models embedded in social platforms or used at large scale. These controls must be continuous, not one-off exercises.

Arguably, given the volume of data and daily transactions they handle, social media platforms should lead on safety standards rather than flout them.

Technology Leaders Must Rebuild Trust as Generative AI Scales Across Platforms and Markets

The first lesson is integrity. AI systems, no matter how advanced, are not fully understood and can behave unpredictably, and the public expects companies to acknowledge this. Upholding accountability and transparency without limitation is essential for rebuilding trust.

The second lesson is that safety must be designed in, not bolted on. Reactive fixes when the pressure starts to build are not enough; responsible and reliable AI requires anticipating misuse, adversarial behaviour and societal impact before deployment. Grok's experience reinforces this point at a much larger scale.

Finally, leaders must recognise that trust is cumulative. Every incident, and how companies choose to respond to it, shapes public perception of the entire industry. Companies that prioritise responsible innovation and doing the right thing from the outset will be the ones that maintain credibility.

What AI Governance Should Look Like Inside Organizations

Treat deployment as a safety and security imperative, not a product decision. Most incidents and failures happen after release, not during development. Companies should conduct adversarial red teaming, stress test models in realistic environments, apply strict content filters and monitoring, and establish kill switches and rollback plans.
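As an illustration only (these names and patterns are hypothetical, not any company's actual stack), the kill-switch and content-filter controls described above can be sketched as a gate wrapped around every model call:

```python
import re

# Hypothetical global kill switch and deny-list; real deployments would use
# managed feature flags and ML-based safety classifiers, not a regex list.
KILL_SWITCH_ENABLED = False
DENY_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"\bmake a weapon\b", r"\bsocial security number\b")]

def safe_respond(prompt: str, model_fn) -> str:
    """Gate a model call behind a kill switch and pre/post content filters."""
    if KILL_SWITCH_ENABLED:
        # Rollback path: refuse, or route to a human / an older vetted model.
        return "Service temporarily unavailable."
    if any(p.search(prompt) for p in DENY_PATTERNS):
        return "This request can't be processed."
    response = model_fn(prompt)
    # Post-generation filter: the same checks apply to model output.
    if any(p.search(response) for p in DENY_PATTERNS):
        return "This request can't be processed."
    return response
```

The point of the sketch is structural: the filter runs on both input and output, and the kill switch short-circuits everything, so disabling a misbehaving model never depends on the model itself.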

Minimise data exposure by design. Use data minimisation, set clear boundaries on what you store or use for training, implement tiered access controls, and adopt privacy-preserving architectures.
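A minimal sketch of the tiered access controls mentioned above, with hypothetical record labels, could deny by default and require an explicit tier for each data class:

```python
from enum import IntEnum

class Tier(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    RESTRICTED = 2   # e.g. raw user prompts that may contain personal data

# Hypothetical mapping of stored record types to the minimum tier required.
RECORD_TIERS = {
    "aggregate_metrics": Tier.PUBLIC,
    "redacted_logs": Tier.INTERNAL,
    "raw_prompts": Tier.RESTRICTED,
}

def can_read(user_tier: Tier, record: str) -> bool:
    """Deny by default: unlabelled records require the highest tier."""
    required = RECORD_TIERS.get(record, Tier.RESTRICTED)
    return user_tier >= required
```

The deny-by-default lookup is the design choice that matters: a record type nobody remembered to classify is treated as the most sensitive, not the least.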

Responsible and reliable AI isn't a one-time governance exercise; it requires continuous oversight as models grow in functionality and capability. That means regular audits, monitoring for drift, incident-reporting mechanisms, and clear accountability at board level to proactively and publicly address failures.
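One common way to monitor for drift, shown here as a sketch only, is to compare the distribution of model outcomes (e.g. refusal vs. completion labels) against a baseline window using the Population Stability Index:

```python
import math
from collections import Counter

def psi(baseline: list, current: list) -> float:
    """Population Stability Index between two categorical samples.

    A common rule of thumb (an assumption to tune per model, not a
    standard): PSI above ~0.2 warrants investigation for drift.
    """
    b, c = Counter(baseline), Counter(current)
    total_b, total_c = len(baseline), len(current)
    score = 0.0
    for cat in set(baseline) | set(current):
        # Small floor avoids log(0) when a category is unseen in one window.
        pb = max(b[cat] / total_b, 1e-6)
        pc = max(c[cat] / total_c, 1e-6)
        score += (pc - pb) * math.log(pc / pb)
    return score
```

For example, a baseline window with a 5% refusal rate compared against a current window refusing 40% of the time yields a PSI well above 0.2, which would trigger the incident-reporting path described above.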

What Individuals Need to Understand About Image Misuse and Privacy in AI Systems

The simplest point of reference is to assume that anything uploaded can be copied, altered or mined for inferences. Even if a platform claims not to train on your data, images can still be screenshotted, scraped, used for impersonation, or used to infer location, habits or relationships.

In today's digital environment it can sound counterintuitive to tell individuals to limit public posting, remove metadata, avoid identifiable backgrounds and use platform privacy settings aggressively. Small changes can dramatically reduce exposure, but they put the onus on individuals and limit their ability to enjoy and use social and AI platforms.

And importantly, know your rights when using different platforms. For example, under many data protection laws, you can request deletion, challenge automated processing and object to your data being used for training.

This is why it's so important for service providers to help bridge the gap by implementing and enforcing safety and security protocols. These can also include protective technologies such as watermarking, adversarial filters, reverse-image monitoring and identity-protection services.

This exclusive feature with Sarah Armstrong-Smith was prepared by Tabish Ali of the AI Speakers Agency.

Market News and Data brought to you by Benzinga APIs

© 2026 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.


Posted In:
Opinion, Interview, General, Contributors