Connect with us

Digital Health

An Overview of Foundation Models & Why A Risk-Based Approach Could Be Helpful

The past couple of years has been all about magnificent AI breakthroughs in the fields of Generative AI, Large Language Models, and Machine Learning. From…

Published

on

This article was originally published by AITHORITY

The past couple of years has been all about magnificent AI breakthroughs in the fields of Generative AI, Large Language Models, and Machine Learning.

From creating flawless content and realistic images to coding (without little or no technical expertise) and creating apps and websites, AI tools have been an indispensable part of our lives.

We are witnessing a magnanimous AI revolution and a lot of it can be credited to foundation models which have been the foundation of several different kinds of AI systems and enabling them to become more productive, and efficient.

What are Foundation Models?

Foundation models can transfer knowledge gained from one context to another using machine learning techniques. While much more information is needed than a typical individual requires moving from a single task to another one, the result is fairly similar. The term foundation models was popularized by Stanford Institute for Human-Centered Artificial Intelligence. When one model is trained on huge data sets, and the same is applied to different applications, they are called foundation models. GPT-3 is a popular example of a foundation model.

According to a blog by IBM, foundations models have several risks associated with it and a risk-based approach is an ideal solution.

Recommended: The Rise of Generative AI – Understanding Its Future in Business Processes

Benefits of Foundation Models

Foundation models are smart and sophisticated enough to strengthen AI’s general-purpose technology capabilities. But since this is an evolving situation, accurately identifying the prospective advantages of foundation models can be dicey and even impossible. Even though many use cases will appear over time, foundation models are promising and impactful in solving many challenges faced by society.

For instance, choosing candidate molecules for new medications or choosing components for upcoming battery technologies both call for in-depth chemical knowledge as well as time-consuming screening and evaluation of various compounds.

Understanding Physical Attributes Of Chemicals

Scientists can quickly anticipate the 3D shape of chemicals and infer their physical attributes, such as their capacity to traverse the blood-brain barrier, with the use of IBM’s MoLFormer-XL, a foundation model developed using data from about 1.1 billion molecules.

  • IBM and Moderna recently announced a partnership to use MoLFormer models to improve the design of mRNA medications.
  • IBM collaborates with NASA to use foundation models to evaluate geographic satellite data to enhance understanding of efforts to combat climate change.

Although these applications are still in their infancy, they have the potential to significantly speed up progress on urgent issues such as climate change and healthcare.

Generative Capabilities in Text and Data

The generative powers of foundation models are essential for other prospective uses, such as fueling productivity tools that enable users to create code, write, and other sorts of material quickly.

In both professional and personal settings, partially automating time-consuming tasks can be helpful since it frees up time for people to focus more of their attention on harder or more fulfilling tasks while producing more.

A Deloitte research discovered that developers may speed up their development process by 20% when employing a code-generating tool.

For instance, a small business could design its product, make marketing materials, and build a website or app utilizing generative tools with comparatively little technical know-how or funding.

Risks of Foundation Models

Despite the enormous potential benefits, it’s critical to avoid overestimating foundation model capabilities because doing so prevents a proper assessment of the benefits and risks they can produce. We are classifying the risk associated with foundation models into three categories – input, output, and governance risks.

The majority of the potential dangers associated with training data and other techniques that impact how a model is built are also present in other forms of AI.

Recommended: Introducing AudioGPT – The Multi-Modal AI System That’s Redefining Audio Modality

Input

For instance, foundation models are not exempt from bias risks, risks relating to whether training data contains personal information, and risks relating to if training data is “poisoned” (i.e., purposefully altered to affect a model’s performance). The foundation models’ increased dangers are typically caused by the vast quantities of disordered data that are frequently used to build the model.

These include challenges with the training data’s intellectual property and copyright, including the utilization of licensed works, as well as issues with transparency and privacy, such as how creators and deployers share details about their training data and the capacity to grant data subjects’ rights like the “right to be forgotten.”

Models trained or retrained on data produced by an AI system provide new types of dangers since they might reinforce or perpetuate unwanted behaviors. There are issues concerning how foundation models are created as well because the model’s behavior may reflect the values that were used to make those decisions.

Output

The majority of foundation model output risks are brand-new, and they frequently result from those models’ generative powers. This includes the ability to create fake but convincing-looking content through hallucination and also the potential to create harmful, hateful, or inappropriate material.

Additionally, foundation models may be purposefully created or employed for evil intentions, such as the dissemination of misinformation and the covert creation of content.

Their resistance to adversarial attacks also presents new difficulties because generative AI directly benefits from techniques like prompt injection, which allow users to trick models into doing things that are otherwise forbidden by their controls.

General Governance

Foundation models present several new governance challenges because of the way they are created and implemented. First, because effectiveness scales with size and the quantity of processing power utilized in model training, building and running a foundation model may need much more energy.

Second, because the connections between developers and deployers can be intricate, it can be difficult to determine how responsibility should be distributed throughout the AI value chain.

For instance, a typical business model is a developer giving a deployer a foundation model, which the deployer then fine-tunes to their particular use case for an actual application before delivering.

Identifying where remedial action might or ought to be taken and who is responsible for doing so can be challenging when difficulties with performance are found.

Lastly, because of the intricacy of this value chain, it can be challenging to define who is entitled to the ownership of foundation models and the applications that use them.

Why Policymakers Should Adopt a Risk-Based Approach

To establish efficient AI governance and regulate AI systems, policymakers must adopt a risk-based approach.  Regulations should be commensurate to the amount of risk involved because different AI applications may differ significantly in their potential to be harmful.

An AI system that recommends TV shows to customers, for instance, has low danger of harm, while an AI system that reviews job applications may have a significant negative influence on a person’s ability to earn money. In the latter scenario, strict guidelines for accuracy, fairness, and transparency would be suitable to lower the possibility that the system is unfairly biased against job seekers.

Such a strategy has received support from policymakers all over the world. Important regulations for AI governance prioritize oversight determined by the level of danger posed by an AI system.

Recommended: Presenting Phoenix – The Multilingual LLM That Aims to Democratize ChatGPT

European Union AI Act draft

According to the intended use of an AI system, the draft European Union AI Act sets risk classifications that range from “unacceptable risk” to “low or minimal risk.”

In various contexts, legal obligations are proportionate to the level of risk. For example, applications that have an elevated likelihood to exploit vulnerable groups are outright prohibited, and systems that pose risks to fundamental human rights are subject to standards regarding information governance, transparency, and human oversight.

National Institute for Standards and Technology (NIST) – AI Risk Management Framework

It is based on the idea of risk prioritization, which states that enhanced risk management is necessary for AI systems that pose larger hazards. It also acknowledges that an AI system’s risk is very contextual and depends on where and how it is used.

Although the framework does not establish official legislative requirements, it can help lay the groundwork for future policymaking that is guided by concrete, real-world instances of responsible AI governance.

Model Artificial Intelligence Governance Framework 

Regulators created tools to assist businesses in deploying AI systems with the proper governance requirements. The methodology recognizes that the likelihood and degree of potential AI damages will depend on the setting of its deployment and employs a risk-based strategy to determine the qualities that are most successful for fostering stakeholders’ trust in AI.

Benefits of the Risk-Based Approach

  • Regulation that is specifically tailored to the danger that an AI system provides a high degree of protection while allowing for an adaptable and evolving regulatory framework.
  • It reduces pointless regulatory barriers, enables greater economic use of AI, and provides strong consumer protections independent of the technology that underlies it. Furthermore, technology neutrality is crucial since it guarantees the future-proofing of any rulemaking.

Whether an AI system employs a foundation model, a risk-based strategy makes sure AI deployers are aware of their obligation to make sure it conforms with all applicable regulatory standards for a specific use case.

Arguments Against the Risk-Based Approach

Some have urged for a departure from this risk-based strategy due to the increasing number of foundation models and the complexity they add to the AI value chain.

They contend that foundation models are flexible enough to be applied to a wide range of situations, some of which might be dangerous or high-risk, and that the technology in itself should be seen as inherently risky.

Developers should share some of their accountability for meeting the legal requirements of downstream applications since deployers of AI systems based on foundation models may not always have control over the development of the underlying model. This could be a grave mistake.

Suggestions for Policymakers & Addressing New Risks

Making sure that the AI policy structure is risk-based and suitably centered on the prospective deployers of AI systems is the best method for policymakers to meaningfully address concerns over foundation models.

This ensures that all AI systems, including those built on foundation models, are governed in a precise and efficient manner to reduce risk. Here are a few points that can help policymakers address new risks of foundation models.

Encourage Transparency

Employers of foundation models should have sufficient knowledge of and access to the models to ensure they are acting appropriately and can comply with any applicable legal requirements. Determining if a specific model is suitable for usage in a particular environment can depend critically on information concerning risk mitigation, governance of data, technical documentation, keeping records, accountability, human oversight, correctness, and cybersecurity of foundation models.

Factsheet

This tool promotes improved AI governance and gives users and deployers pertinent knowledge about how an artificial intelligence (AI) framework or service was made. It contains information like performance metrics, evaluation of biases, and energy consumption.

Legislators should establish official guidelines for what data should be included in FactSheets and mandate that AI developers provide such paperwork. By doing this, compliance would be made simpler and foundation model risks would be greatly reduced.

Recommended: General Purpose AI (GPAI): What You Need to Know  

Flexible Approaches

The need for adaptable soft law approaches for AI governance should be recognized by policymakers as they create risk-based regulations for AI. This is especially important for defining how accountability for various actors across the AI value chain should be distributed.

For instance, contractual agreements between a developer and a deployer are often the best way to address concerns around accountability for an AI system’s performance. It is also reasonable for a deployer to require that, following the deployment of their AI system, their developer fix performance concerns with the foundation model. Given the wide range of generative AI applications that could be used and the various degrees of control that AI developers and deployers could desire, policymakers should safeguard their right to contractually negotiate and define roles.

Understanding the Difference Between Different Business Models

Employees might utilize an enterprise-facing chatbot to learn about HR policies, but this chatbot could produce violent or unpredictable output, raising questions about security at work and whether employees can get correct HR information. If the chatbot were intended for consumers, the risk associated with the app would be very different because more people, including kids, may be exposed to its behavior.

Many times, it would be beneficial for policymakers to distinguish between closed-domain applications like enterprise AI assistants and open-domain applications like the use of generative AI to enhance online search.

The limited functionality of the AI system in closed-domain applications restricts the kinds of dangers it can create, whereas general-purpose, open-domain apps can create a larger range of problems. The degree of control and accountability that each actor along the AI value chain has over the deployment of an AI system varies. Differentiating between developers and deployers is a task for policymakers.

Conscious and Increased Study

To better understand these concerns and create clear legislative advice to preserve IP rights and innovation, policymakers should collaborate closely with businesses, artists, producers of content, and other pertinent stakeholders.

Given the potential impact these issues might have on creativity and rivalry, waiting for drawn-out legal proceedings to remove misunderstanding on an as-needed basis is not a desirable option. Accessibility to the technical resources required to create and assess AI systems is another essential element for understanding new hazards. Large quantities of processing power are needed to create foundation models, which might be prohibitively expensive for smaller research facilities, academic institutions, and other parties with an interest in examining AI systems.

Government officials ought to spend money on shared research infrastructure. For instance, the National AI Study Resource Task Force of the United States advised that the United States invest $2.25 billion on the technical resources required to carry out this study, such as processing, data, training, and resources for software.

Lastly, policymakers should support and encourage the advancement of stronger scientific evaluation techniques for foundation models. The efficacy, bias, accuracy, and other important aspects of an AI system can be measured using a variety of metrics, however, the use of generative AI may render these measurements inaccurate or ineffective.

Final Thoughts

Foundation models are rising stars that haven’t been deployed all that widely. As they gain strength and become more integrated into society and the economy, new hazards could arise, some anticipated concerns might not materialize, and some risks could be mitigated by social conventions or market forces sans requiring policymaker action. Given the tremendous advantages of foundation models, securing both society and the economy from its potential dangers will be crucial to ensuring that technological advancement is a positive force.  While ensuring that the approach to managing AI stays risk-based and technology-neutral, policymakers should move quickly to comprehend more and minimize the dangers of foundation models.

[To share your insights with us, please write to sghosh@martechseries.com].

The post An Overview of Foundation Models & Why A Risk-Based Approach Could Be Helpful appeared first on AiThority.

artificial intelligence

machine learning


Digital Health

Keep it Short

By KIM BELLARD OK, I admit it: I’m on Facebook. I still use Twitter – whoops, I mean X. I have an Instagram account but don’t think I’ve ever posted….

Continue Reading
Life Sciences

Asian Fund for Cancer Research announces Degron Therapeutics as the 2023 BRACE Award Venture Competition Winner

The Asian Fund for Cancer Research (AFCR) is pleased to announce that Degron Therapeutics was selected as the winner of the 2023 BRACE Award Venture Competition….

Continue Reading
Digital Health

Seattle startup Olamedi building platform to automate health clinic communications

A new Seattle startup led by co-founders with experience in health tech is aiming to automate communication processes for healthcare clinics with its software…

Continue Reading

Trending