
How to establish secure AI+ business models

September 18, 2023


Enterprise adoption of AI has doubled over the past five years, with CEOs today stating that they face significant pressure from investors, creditors and lenders to accelerate adoption of generative AI. This is largely driven by a realization that we’ve crossed a new threshold with respect to AI maturity, introducing a new, wider spectrum of possibilities, outcomes and cost benefits to society as a whole.

Many enterprises have been hesitant to go “all in” on AI, as certain unknowns within the technology erode inherent trust. And security is typically viewed as one of these unknowns. How do you secure AI models? How can you ensure this transformative technology is protected from cyberattacks, whether in the form of data theft, manipulation and leakage, or evasion, poisoning, extraction and inference attacks?

The global sprint to establish an AI lead—whether amongst governments, markets or business sectors—has spurred pressure and urgency to answer this question. The challenge with securing AI models stems not only from the underlying data’s dynamic nature and volume, but also from the extended “attack surface” that AI models introduce: an attack surface that is new to all. Simply put, there are many potential entry points that adversaries can attempt to compromise in order to manipulate an AI model or its outcomes for malicious objectives, and many of them we are still discovering.

But this challenge is not without solution. In fact, we’re experiencing the largest crowdsourced movement to secure AI that any technology has ever instigated. The Biden-Harris Administration, DHS CISA and the European Union’s AI Act have mobilized the research, developer and security community to collectively work to drive security, privacy and compliance for AI.

Securing AI for the enterprise

It is important to understand that security for AI is broader than securing the AI itself. In other words, securing AI is not confined solely to the models and data. We must also treat the enterprise application stack that an AI model is embedded in as a defensive mechanism, extending protections to the AI within it. By the same token, because an organization’s infrastructure can act as a threat vector capable of providing adversaries with access to its AI models, we must ensure the broader environment is protected.

To appreciate the different means by which we must secure AI—the data, the models, the applications and the full process—we must be clear not only about how AI functions, but exactly how it is deployed across various environments.

The role of an enterprise application stack’s hygiene

An organization’s infrastructure is the first layer of defense against threats to AI models. Ensuring proper security and privacy controls are embedded into the broader IT infrastructure surrounding AI is key. This is an area in which the industry has a significant advantage already: we have the know-how and expertise required to establish optimal security, privacy, and compliance standards across today’s complex and distributed environments. It’s important we also recognize this daily mission as an enabler for secure AI.

For example, enabling secure access to users, models and data is paramount. We must use existing controls and extend this practice to securing pathways to AI models. In a similar vein, AI brings a new visibility dimension across enterprise applications, warranting that threat detection and response capabilities are extended to AI applications.
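
To make the idea of extending existing access controls to AI pathways concrete, here is a minimal sketch of a role-gated inference entry point. The role names, the `verify_token` stand-in and the `run_inference` call are illustrative assumptions, not part of any specific product; a real deployment would wire these into the organization's existing IAM and threat detection tooling.

```python
# Minimal sketch: role-gated access to a hypothetical model inference endpoint.
# The roles, token handling and model call are illustrative assumptions, not a
# reference to any specific product API.
from dataclasses import dataclass, field

ALLOWED_ROLES = {"ml-engineer", "analyst"}  # roles permitted to query the model


@dataclass
class Caller:
    user_id: str
    roles: set = field(default_factory=set)


def verify_token(token: str) -> Caller:
    """Stand-in for the organization's existing identity provider lookup."""
    # In practice this would validate a signed token (e.g., OIDC/JWT) against
    # the central IAM system; here we return a fixed caller for illustration.
    return Caller(user_id="demo-user", roles={"analyst"})


def run_inference(prompt: str) -> str:
    """Placeholder for the actual call into the deployed model."""
    return "model output placeholder"


def query_model(token: str, prompt: str) -> str:
    caller = verify_token(token)
    if not ALLOWED_ROLES & caller.roles:
        raise PermissionError(f"{caller.user_id} is not authorized to query this model")
    # Log every access so threat detection can correlate model usage later.
    print(f"AUDIT model_access user={caller.user_id} prompt_chars={len(prompt)}")
    return run_inference(prompt)
```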

Table-stakes security standards—such as employing secure transmission methods across the supply chain, establishing stringent access controls and infrastructure protections, and strengthening the hygiene and controls of virtual machines and containers—are key to preventing exploitation. As we look at our overall enterprise security strategy, we should reflect those same protocols, policies, hygiene and standards onto the organization’s AI profile.
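
As a small, hedged illustration of secure transmission across the supply chain, the sketch below fetches a model artifact over TLS and verifies it against a published hash before use. The URL, destination path and expected hash are placeholders.

```python
# Minimal sketch: fetching a model artifact over TLS and checking its integrity
# before use. The URL and expected hash are placeholder assumptions; a real
# pipeline would take the published hash from the model provider.
import hashlib
import urllib.request

EXPECTED_SHA256 = "<hash published by the artifact provider>"


def fetch_artifact(url: str, dest: str) -> str:
    # urllib verifies TLS certificates by default; verification should never
    # be disabled for supply-chain downloads.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"artifact hash mismatch: {digest}")
    with open(dest, "wb") as out:
        out.write(data)
    return dest
```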

Usage and underlying training data

Even though AI lifecycle management requirements are still taking shape, organizations can leverage existing guardrails to help secure the AI journey. For example, transparency and explainability are essential to preventing bias, hallucination and poisoning, which is why AI adopters must establish protocols to audit the workflows, training data and outputs for the models’ accuracy and performance. In addition, the data’s origin and preparation process should be documented for trust and transparency. This context and clarity can help detect anomalies in the data at an early stage.
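
One lightweight way to document data origin and preparation, assuming nothing about a particular tooling stack, is to record a provenance entry (source, preparation steps, content hash) for each training dataset. The sketch below is illustrative only; the field names and the JSONL log target are assumptions.

```python
# Minimal sketch: recording where a training dataset came from and how it was
# prepared, so workflows and data can be audited later. Field names and the
# JSONL log target are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone


def dataset_fingerprint(path: str) -> str:
    """Content hash of the training file, so later audits can detect tampering."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def record_provenance(path: str, source: str, preparation_steps: list) -> dict:
    entry = {
        "dataset": path,
        "source": source,                  # where the data originated
        "preparation": preparation_steps,  # documented cleaning/labeling steps
        "sha256": dataset_fingerprint(path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open("provenance_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```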

Security must be present across the AI development and deployment stages—this includes enforcing privacy protections and security measures in the training and testing data phases. Because AI models learn from their underlying data continually, it’s important to account for that dynamism, acknowledge potential risks to data accuracy, and incorporate test and validation steps throughout the data lifecycle. Data loss prevention techniques are also essential here to detect and prevent SPI, PII and regulated data leakage through prompts and APIs.
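
As an illustration of the kind of prompt-level data loss prevention described above, the sketch below applies simple pattern checks for common PII before a prompt is forwarded. The regexes, the `send_to_model` wrapper and the downstream `call_model_api` function are illustrative assumptions; a production control would rely on the organization's existing DLP classifiers.

```python
# Minimal sketch: screening prompts for obvious PII before they are forwarded
# to a model or external API. The patterns are illustrative, not exhaustive;
# a production control would use the organization's existing DLP classifiers.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def screen_prompt(prompt: str) -> list:
    """Return the PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def send_to_model(prompt: str) -> str:
    findings = screen_prompt(prompt)
    if findings:
        # Block (or redact and route for review) rather than leak regulated data.
        raise ValueError(f"prompt blocked, possible PII detected: {findings}")
    return call_model_api(prompt)  # hypothetical downstream model call


def call_model_api(prompt: str) -> str:
    return "model response placeholder"
```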

Governance across the AI lifecycle

Securing AI requires an integrated approach to building, deploying and governing AI projects. This means building AI with governance, transparency and ethics that support regulatory demands. As organizations explore AI adoption, they must evaluate open-source vendors’ policies and practices regarding their AI models and training datasets, as well as the maturity of their AI platforms. This should also account for data usage and retention—knowing exactly how, where and when the data will be used, and limiting data storage lifespans to reduce privacy concerns and security risks. In addition, procurement teams should be engaged to ensure alignment with the enterprise’s current privacy, security and compliance policies and guidelines, which should serve as the base of any AI policies that are formulated.
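
To make the retention point concrete, here is a minimal sketch of enforcing a storage lifespan on retained records; the record shape and the 30-day window are assumptions standing in for whatever retention policy the enterprise actually defines.

```python
# Minimal sketch: enforcing a storage lifespan on retained interaction records.
# The record shape and the 30-day window are illustrative assumptions tied to
# whatever retention policy the enterprise has actually defined.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)


def purge_expired(records: list) -> list:
    """Keep only records younger than the retention window.

    Assumes each record is a dict with a timezone-aware "stored_at" datetime.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["stored_at"] >= cutoff]
```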

Securing the AI lifecycle includes enhancing current DevSecOps processes to include ML—adopting these processes while building integrations and deploying AI models and applications. Particular attention should be paid to the handling of AI models and their training data: training the AI pre-deployment and managing versions on an ongoing basis are key to maintaining the system’s integrity, as is continuous training. It is also important to monitor the prompts and the people accessing the AI models.
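
A minimal sketch of the versioning and monitoring idea follows: pin each deployed model artifact to a content hash and log who queried which version. The file paths, logger and field names are illustrative assumptions.

```python
# Minimal sketch: pinning a deployed model artifact to a content hash and
# logging which user queried which version, so integrity and usage can be
# monitored over time. File paths and the log target are illustrative.
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-access")


def model_version(artifact_path: str) -> str:
    """Content hash of the model artifact, used as an immutable version id."""
    with open(artifact_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:12]


def log_access(user_id: str, version: str, prompt: str) -> None:
    """Record who queried which model version, for later review and detection."""
    log.info(
        "model_query user=%s version=%s at=%s prompt_chars=%d",
        user_id,
        version,
        datetime.now(timezone.utc).isoformat(),
        len(prompt),
    )
```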

By no means is this a comprehensive guide to securing AI; the intention here is to correct misconceptions about doing so. The reality is that we already have substantial tools, protocols and strategies available to us for the secure deployment of AI.

Best practices to secure AI

As AI adoption scales and innovations evolve, the security guidance will mature too, as has been the case with every technology that has been embedded into the fabric of an enterprise over the years. Below are some best practices from IBM to help organizations prepare for the secure deployment of AI across their environments:

  1. Leverage trusted AI by evaluating vendor policies and practices.
  2. Enable secure access to users, models and data.
  3. Safeguard AI models, data and infrastructure from adversarial attacks.
  4. Implement data privacy protection in the training, testing and operations phases.
  5. Incorporate threat modeling and secure coding practices into the AI development lifecycle.
  6. Perform threat detection and response for AI applications and infrastructure.
  7. Assess and decide AI maturity through the IBM AI framework.

See how IBM accelerates secure AI for businesses

Distinguished Engineer, Master Inventor, CTO, IBM Consulting Cybersecurity Services


