Leon Como

The GenAI Rubicons

Updated: Sep 24




On regulations and guardrails


Suppose we take the position that regulations and guardrails for GenAI should primarily focus on uses and use cases rather than being imposed at the foundational model layer. Here are the probable reasons why:

 

1. Flexibility and Innovation: By maintaining the foundational model in a relatively raw and open state, GenAI can evolve, innovate, and adapt to a wide range of applications. Guardrails at this level might limit its potential to serve novel, unforeseen, or highly specialized use cases.

 

2. Contextual Relevance: Different industries, regions, and applications have diverse needs and concerns regarding AI use, ranging from ethical issues to security. Applying guard rails at the use-case layer allows tailoring of regulations and safety mechanisms to specific contexts without imposing unnecessary constraints on other sectors where different priorities exist.

 

3. Dynamic Adaptation: Use-case-specific guardrails can adapt more quickly to new developments, allowing regulations to focus on emergent risks or harmful uses as they arise without requiring changes to the foundational model, which would be more cumbersome and slower to implement.

 

4. Accountability at Deployment: Guardrails should shift responsibility to those deploying AI solutions rather than to model creators. This would encourage responsible implementation and usage, as organizations would have to evaluate and manage the risks associated with their particular applications of AI.

 

5. Preserving Raw AI Potential: Raw foundational models have an "innocence" that keeps them neutral and versatile. Imposing too many constraints at the foundational level risks embedding biases, preferences, or limitations that could prevent the model from being as robust and adaptable as possible.

 

This approach ensures that GenAI remains a flexible, dynamic tool, while safety and ethical considerations are layered appropriately at the deployment stage, customized for particular applications.

 

But we also need to examine the potential bias in absolving foundational model makers of responsibility. It's a nuanced issue, and there are valid arguments for holding model creators accountable, especially given the financial stakes and the potential to stifle adoption. Here are some considerations that highlight both sides of the debate:

 

1. Shared Responsibility:

Instead of placing the entire burden either on the model makers or the end users, a shared responsibility model could work better. Model creators could be responsible for ensuring that their AI systems are robust, transparent, and explainable, minimizing harm at the foundational level (e.g., mitigating inherent biases or security vulnerabilities). Meanwhile, the users would bear the responsibility for ensuring that their applications comply with specific use-case regulations. This approach acknowledges that both parties are responsible for their part in the ecosystem.

 

2. Accountability and Trust:

Without some form of accountability for model makers, concerns about trust and ethical use may hamper widespread adoption. If developers are seen as pushing all responsibility downstream, organizations may hesitate to adopt GenAI due to legal and reputational risks. This could stifle innovation as companies fear being held liable for how the model behaves, even if the foundational system has baked-in issues (e.g., biased data, flawed algorithms).

 

Example: Imagine an organization adopts a GenAI model and uses it in hiring decisions, and later it’s revealed that the model had hidden biases in how it assessed candidates. If foundational model makers were completely absolved of responsibility, that organization could face severe legal and reputational damage for using a biased tool they did not fully control.

 

3. Financial Implications:

A lot of money is being invested and earned at the model-making stage, so there is a legitimate expectation for model makers to bear some ethical responsibility. Given their influence and resources, model creators are in a strong position to ensure that foundational models meet certain standards, including fairness, transparency, and safety. It would be reasonable to expect them to contribute to solutions that support safer and more responsible AI use across all layers.

 

4. Stifling Adoption:

Placing the entire burden on end users, especially small businesses or startups without the necessary resources or technical expertise, could indeed stifle adoption. Many companies could be hesitant to use GenAI if they feel they must build layers of ethical and legal safeguards around a tool they don’t fully understand, which is costly and risky. In contrast, if foundational model makers offer pre-built mechanisms that reduce risk, adoption could be smoother and faster.

 

5. Mitigating Systemic Risks:

Foundational model makers should at least be required to mitigate known systemic risks that could affect the broadest range of applications. For example, they could implement mechanisms to detect and warn about biased outputs, security vulnerabilities, or misuse potential. This would help avoid leaving the end users solely responsible for spotting deeply ingrained issues in a model they didn’t design.

 

A Layered Approach with Model-Maker Responsibility

Rather than absolving foundational model makers entirely, a balanced approach that involves layered responsibility seems fairer and more pragmatic. Foundational model creators should ensure the model is safe, reliable, and minimally biased. End users should focus on specific deployment risks and compliance with context-sensitive regulations. By distributing responsibility across the entire AI lifecycle, from model creation to end-use, we can foster both innovation and trust while addressing concerns about risk and accountability.

 

This model may also encourage faster adoption by easing the burden on users, especially those with fewer resources to implement AI safely.

 

GenAI as an Extension of the Internet


If we think of GenAI as an extension of the internet, one focused on knowledge, intelligence, and possibly wisdom, then protocols should be the foundation, similar to how the internet operates. The analogy to the internet as a "user beware" system is accurate on many points, and it offers important lessons for how we might approach regulating GenAI.

 

1. Protocol-First Approach:

Just like the internet relies on protocols (e.g., HTTP, TCP/IP) to provide a common infrastructure for communication and data transfer, GenAI could benefit from protocols that guide the ethical, technical, and operational aspects of AI interactions and outputs. These protocols would provide a baseline for how AI systems operate, communicate, and are used across different contexts.

 

Example of Protocols: Standards for transparency in model outputs, mechanisms for verifying sources of information, or protocols for handling sensitive data could serve as universal guidelines. These could empower users to know how to engage with GenAI systems and what to expect.
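
As a rough illustration, here is a minimal Python sketch of what an output-transparency record under such a protocol might look like. The structure and field names (model_id, sources, known_limitations, and so on) are assumptions made for illustration, not an existing standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class OutputTransparencyRecord:
    """Hypothetical per-response metadata a GenAI system could attach
    so users can see where an answer came from and how to verify it."""
    model_id: str                                   # which foundational model produced the output
    model_version: str                              # version or checkpoint identifier
    generated_at: str                               # ISO-8601 timestamp of generation
    sources: list = field(default_factory=list)     # citations or retrieval sources, if any
    sensitive_data_policy: str = "none"             # how sensitive inputs were handled
    known_limitations: list = field(default_factory=list)  # disclosed caveats or biases

record = OutputTransparencyRecord(
    model_id="example-foundation-model",            # assumed name, for illustration only
    model_version="2025-01",
    generated_at=datetime.now(timezone.utc).isoformat(),
    sources=["https://example.org/cited-article"],
    known_limitations=["training data ends mid-2024"],
)

# Serialize so any client or auditor can read the same structure.
print(json.dumps(asdict(record), indent=2))
```

A shared record like this is what would let users "know how to engage with GenAI systems and what to expect," regardless of which vendor produced the model.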

 

2. "User Beware" vs. "User Empowered":

The traditional internet is largely "user beware" because it provides vast freedom but leaves much of the responsibility for safe use to the end user. This decentralized structure allowed the internet to scale and innovate at an unprecedented rate. However, it also resulted in challenges like misinformation, security vulnerabilities, and exploitation.

 

With GenAI, while we might lean on a user-beware system similar to the internet, there's an opportunity to move toward a user-empowered system. By putting protocols in place that ensure transparency, explainability, and traceability in AI systems, we can equip users with the knowledge and tools to interact with GenAI intelligently. This way, the onus is still on the user to make responsible decisions, but they are empowered by a clearer, more accountable structure.

 

3. The Role of Guardrails and Rules:

Guardrails and regulations could come later and on top of protocols, designed for specific high-risk applications (e.g., in medicine, autonomous driving, or financial services). This layered approach, where the first layer is protocol-based, would allow for innovation and free use in many areas while placing stricter rules only in contexts where harm or misuse can have severe consequences.

 

4. Decentralization and Openness:

The decentralized nature of the internet allowed rapid growth and a low barrier to entry, encouraging a diverse ecosystem. Similarly, GenAI could follow this path by allowing multiple entities (developers, researchers, and organizations) to build on common protocols. This openness fosters innovation but, as with the internet, would require community-driven norms and best practices to avoid pitfalls like monopolization or over-centralization of control.

 

Web Analogy: Just as the internet grew largely without heavy top-down regulation but relied on evolving protocols (e.g., data encryption, security layers) to safeguard it over time, GenAI could follow this trajectory, starting with robust protocols that evolve with use and experience.

 

5. Balanced Freedom with Responsibility:

Protocols provide a framework of freedom without the heavy-handedness of regulations that might stifle GenAI's growth. As noted, this parallels the internet, where users are free but also responsible for their choices, especially in how they consume, produce, or share content. In GenAI, empowering users with knowledge of protocols, such as how to verify AI-generated information or how to recognize the limitations and biases of the models, will enable responsible interaction.

 

6. A Knowledge-Based Internet:

Since GenAI focuses specifically on knowledge, intelligence, and perhaps wisdom, the stakes may be higher compared to the traditional internet, where entertainment, commerce, and communication dominate. Protocols around ethics, authenticity, and the quality of information would be key to preserving trust in GenAI systems, much like how protocols for encryption and data protection became crucial for secure internet use.

 

Protocols Before Rules, Empowerment Over Regulation

Protocols should come before rules, regulations, and guardrails when it comes to GenAI. This allows for innovation and scalability while creating a shared infrastructure that can guide ethical, safe, and responsible AI development and use. The "user beware" approach of the traditional internet worked because it provided a flexible framework for growth, but for GenAI, we can take it further by creating a user-empowered ecosystem that balances freedom with responsibility, enabling people to use these systems with both awareness and agency.

 

This way, we can preserve the potential for GenAI to become a tool of wisdom, not just intelligence.

 

Differentiation from the Internet in Terms of Bias


Bias in GenAI models is a fundamentally different challenge from what the internet faced. In the internet’s case, the infrastructure (e.g., CPUs, routers, switches) was largely neutral. It didn’t prioritize certain web traffic based on biases inherent in the hardware. The issue with GenAI, though, is that bias can be embedded in the algorithms, data, and model training itself, which could have serious consequences and may not always be easily fixed at the adoption or user layers.

 

Let’s break down the key considerations and why model transparency around bias is essential, along with how protocols and other measures might address this challenge:

 

1. Inherent vs. Operational Bias:

Internet Infrastructure: The physical layers of the internet (routers, CPUs, etc.) don’t introduce bias in the way they handle traffic or data. Even though network throttling or prioritizing certain traffic exists in some contexts (e.g., net neutrality debates), the underlying hardware remains largely neutral.

 

AI Models: AI models, however, are deeply influenced by the data they are trained on, and human biases (intentional or unintentional) can permeate every layer, from the dataset to the algorithmic choices made during training. Bias in models is more existential because it is tied directly to what the AI knows and how it reasons, which can significantly affect its outputs.

 

2. Bias is Hard to Detect and Correct:

Unlike biases in human decision-making, which can be debated and rectified through policy or social consensus, biases in AI models are often hidden within complex algorithms. These biases can manifest in subtle ways: prioritizing certain demographic groups, favoring certain types of information, or systematically ignoring minority perspectives.

 

The danger is that biased models can amplify and entrench societal inequalities, and the more foundational the model, the harder it is to fix later in the adoption phase. If biases are embedded during training, any system built on top of these models will inherit and propagate them. This is particularly dangerous because users might not even realize that biases are influencing their interactions with the AI.

 

3. Transparency as a Protocol:

For GenAI to be widely trusted and used responsibly, transparency in how models are trained, validated, and monitored for bias should be a key protocol. This transparency should include:

 

Data Provenance: Where did the training data come from? Are certain groups or perspectives over- or underrepresented?

Bias Auditing: Has the model been tested for known biases? Can those biases be quantified and made clear to users?

Explainability: Are the decision-making processes of the model clear enough for users to understand and challenge potentially biased outputs?

 

These protocols would ensure that any organization adopting the model is aware of its potential limitations and risks. Furthermore, this transparency would make it easier to identify where biases come from, whether in the data collection, model architecture, or training process, and allow for more informed choices by users and developers.
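
To make this more concrete, here is a minimal Python sketch of the kind of transparency card a model maker could publish, covering data provenance, bias-audit results, and explainability support. The structure, field names, and numbers are assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class BiasAuditResult:
    """One audited attribute with a simple disparity metric (assumed format)."""
    protected_attribute: str     # e.g. "gender", "age_band"
    metric: str                  # e.g. "demographic_parity_difference"
    value: float                 # measured disparity on the evaluation set
    threshold: float             # the level the maker considers acceptable

@dataclass
class ModelTransparencyCard:
    """Hypothetical disclosure a foundational-model maker could publish."""
    model_id: str
    data_sources: list = field(default_factory=list)             # provenance of training data
    underrepresented_groups: list = field(default_factory=list)  # known coverage gaps
    bias_audits: list = field(default_factory=list)              # BiasAuditResult entries
    explainability_tools: list = field(default_factory=list)     # e.g. "SHAP", "LIME"

    def flagged_audits(self):
        """Return audits whose disparity exceeds the stated threshold."""
        return [a for a in self.bias_audits if abs(a.value) > a.threshold]

card = ModelTransparencyCard(
    model_id="example-foundation-model",
    data_sources=["public web crawl (2020-2024)", "licensed news corpus"],
    underrepresented_groups=["low-resource languages"],
    bias_audits=[BiasAuditResult("gender", "demographic_parity_difference", 0.08, 0.05)],
    explainability_tools=["SHAP", "LIME"],
)
print([a.protected_attribute for a in card.flagged_audits()])  # -> ['gender']
```

The point is not the exact fields but that provenance, audit results, and explainability support travel with the model in a machine-readable form an adopter can inspect before deployment.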

 

4. Bias Correction at the Training Layer:

Some biases are difficult, if not impossible, to fix at the adoption layer. If a foundational model is trained on biased data, those biases may be too deeply ingrained to remove or mitigate later. This reinforces the importance of focusing on bias correction during training, which could involve:

 

Diverse and Representative Datasets: Making sure that the data used to train models is as diverse and representative as possible. This must include perspectives, voices, and contexts that are often marginalized.

Fairness Algorithms: Implementing algorithms that actively mitigate bias during the training process by balancing data or adjusting weights to ensure fair treatment across groups (a minimal sketch follows this list).

Continuous Monitoring: Ongoing monitoring and updating of models as societal standards change. What might be considered neutral or acceptable in one period may be seen as biased or harmful later.
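
As one concrete instance of the fairness-algorithm point, here is a minimal sketch of reweighting training examples so that group membership and label behave as if they were independent, in the spirit of classic reweighing approaches. The toy data and group names are invented for illustration.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Assign each example a weight equal to (expected count / observed count)
    for its (group, label) pair, so that after weighting, group membership
    and label are statistically independent."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    pair_counts = Counter(zip(groups, labels))
    weights = []
    for g, y in zip(groups, labels):
        expected = group_counts[g] * label_counts[y] / n   # count if independent
        observed = pair_counts[(g, y)]
        weights.append(expected / observed)
    return weights

# Toy data: group "B" is under-represented among positive labels.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
for g, y, w in zip(groups, labels, reweighing_weights(groups, labels)):
    print(g, y, round(w, 2))
# Weights above 1 boost under-represented (group, label) pairs,
# e.g. positive examples from group "B", during training.
```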

 

If we fail to address bias at this foundational level, it can cascade through various applications, from hiring and lending decisions to healthcare and criminal justice systems, amplifying existing inequalities and introducing new ones.

 

5. The Threat of Unfixable Bias:

The real danger is that bias in AI models may be "unfixable" at the adoption layer. Once models are deployed, it’s difficult to unravel the layers of bias that were baked in during training. For example, if a medical AI system is biased toward diagnosing conditions based on data from one demographic, retraining that system for equitable outcomes might be far more difficult than if bias had been mitigated during initial development.

 

In this context, protocols for transparency and bias detection need to be integral to the foundational model layer. Without these, we risk creating a future where biased AI systems make decisions that profoundly affect people's lives, decisions that may not be easily challenged or corrected.

 

6. Protocols to Address Bias:

We can think of protocols that operate on two levels:

Training Level Protocols: Standards and guidelines for how training datasets are curated, tested, and made transparent. This could involve open protocols for bias audits, model validation against diverse datasets, and public reports on the training process.

Operational Layer Protocols: For end-users, protocols could help them detect, mitigate, or counter biases in AI outputs. This might include user-facing tools that explain the model’s reasoning or show alternate outputs based on different assumptions.
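
As an example of what an operational-layer protocol might give end users, here is a minimal sketch of a counterfactual check that swaps a demographic term in a prompt and compares the outputs. The generate callable is a stand-in for whatever text-generation call a deployment actually uses, so the names and the similarity heuristic are assumptions, not a real API.

```python
import difflib

def counterfactual_check(prompt, generate, swaps):
    """Run the same prompt with demographic terms swapped and report how much
    the outputs differ. A large difference is a signal to review, not proof of bias."""
    baseline = generate(prompt)
    report = []
    for original, replacement in swaps:
        variant = generate(prompt.replace(original, replacement))
        similarity = difflib.SequenceMatcher(None, baseline, variant).ratio()
        report.append({"swap": f"{original} -> {replacement}",
                       "similarity": round(similarity, 2)})
    return report

# Stand-in generator so the sketch runs without any model behind it.
def fake_generate(prompt):
    return f"Echo: {prompt}"

print(counterfactual_check(
    "Write a short performance review for Alex, who is a man.",
    fake_generate,
    swaps=[("a man", "a woman")],
))
```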

 

Model Transparency is Critical for Bias Mitigation

To summarize, while the internet largely avoided issues of bias at the infrastructure level, GenAI’s foundational models introduce new and more dangerous forms of bias that are harder to fix once deployed. The stakes are higher because AI systems influence knowledge, intelligence, and decision-making, and biases can have far-reaching societal impacts.

 

Therefore, the creation of transparency protocols during model training, alongside bias auditing, explainability, and diverse data representation, is essential. These protocols should focus on ensuring that biases are caught and addressed at the model-building layer, not left to end-users to detect and fix after the fact. This layered approach would help ensure GenAI is trusted, responsible, and equitable across all its applications.

 

We must also take into consideration that biases in the infrastructure of the internet, such as those related to the way traffic is routed or prioritized (e.g., network throttling, bandwidth allocation, or discriminatory practices like net neutrality violations), can be detected and measured by instruments or monitoring tools. These biases, when they occur, tend to be more transparent and quantifiable because they are based on physical and technical parameters that can be observed, tested, and rectified.

 

Biases in Infrastructure vs. AI Models:


1. Infrastructure Bias:

Detectability: Biases in the internet infrastructure (e.g., unequal distribution of bandwidth, geo-restrictions, network throttling) are typically quantifiable. Instruments like packet sniffers, network analyzers, or traffic monitoring systems can be used to detect whether certain types of traffic are being treated differently (a minimal measurement sketch follows this list).

Correctability: Once detected, these biases can often be corrected through technical adjustments or policy changes (e.g., enforcing net neutrality or updating network configurations). They are generally less complex to fix because the underlying issue is often clear.

Examples: ISPs favoring certain services, companies throttling traffic from competitors, or even hardware manufacturers prioritizing specific protocols can potentially be detected through technical monitoring.
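
To illustrate how instrument-based this kind of detection can be, here is a minimal sketch that compares throughput samples (bytes transferred and elapsed seconds) collected for two services over the same connection. The sample numbers are invented, and real measurements would come from the kinds of monitoring tools mentioned above.

```python
def mean_throughput_mbps(samples):
    """samples: list of (bytes_transferred, seconds). Returns mean Mbit/s."""
    rates = [(b * 8 / 1_000_000) / s for b, s in samples]
    return sum(rates) / len(rates)

# Invented measurements for two services fetched over the same connection.
service_a = [(50_000_000, 4.1), (48_000_000, 4.0), (51_000_000, 4.2)]     # roughly 95 Mbit/s
service_b = [(50_000_000, 13.9), (49_000_000, 13.5), (52_000_000, 14.3)]  # roughly 29 Mbit/s

a, b = mean_throughput_mbps(service_a), mean_throughput_mbps(service_b)
ratio = max(a, b) / min(a, b)
print(f"A: {a:.1f} Mbit/s, B: {b:.1f} Mbit/s, ratio {ratio:.1f}x")
if ratio > 2:  # arbitrary illustrative threshold
    print("Large gap: a candidate signal of differential treatment worth investigating.")
```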

 

2. AI Model Bias:

Opacity: In contrast, biases in AI models are often opaque. They emerge from the complex interactions of data, algorithms, and model training. These biases are harder to detect because they may only show up in specific contexts or subtle ways. Unlike infrastructure, which can be monitored in real time, the internal workings of AI models don’t lend themselves easily to traditional instruments of detection.


Hidden Complexity: AI model bias is often hidden in the layers of abstraction within neural networks or other learning systems. It requires specialized audits and fairness testing (e.g., model explainability tools like SHAP or LIME, bias detection frameworks) that aren't as straightforward as monitoring internet infrastructure.


Correctability: As discussed, biases in AI may not be easily fixable at the user or operational level because they are built into the model’s logic. It can be difficult to detect bias without deep analysis, and even harder to retrain or alter the model without impacting its overall performance or intent.

 

Instrumental Detection in AI:

To some extent, instruments for bias detection in AI may exist as well, but they are far more specialized and context-dependent:

- Algorithmic Fairness Tools: There are emerging tools designed to audit AI models for bias, such as Fairness Indicators, AI Fairness 360, and other machine learning fairness frameworks. These tools are akin to "instruments" that help expose bias in AI decision-making.

- Explainability Tools: Tools like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (Shapley Additive Explanations) provide insights into how models arrive at their decisions, allowing developers to detect bias. However, these tools are still not as straightforward as detecting network-level bias in internet infrastructure.
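
In the same spirit as those toolkits, here is a minimal, library-free sketch of one such "instrument": computing the demographic parity difference, the gap in positive-outcome rates between two groups, from a set of model decisions. The data is invented for illustration, and real audits would use richer metrics and larger samples.

```python
def positive_rate(decisions, groups, target_group):
    """Share of positive decisions (1) received by members of target_group."""
    pairs = [(d, g) for d, g in zip(decisions, groups) if g == target_group]
    return sum(d for d, _ in pairs) / len(pairs)

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Difference in positive-outcome rates between two groups; 0 means parity."""
    return (positive_rate(decisions, groups, group_a)
            - positive_rate(decisions, groups, group_b))

# Invented screening decisions (1 = advance, 0 = reject) with a group label each.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups, "A", "B")
print(f"Positive-rate gap between A and B: {gap:.2f}")  # 0.60 - 0.40 = 0.20
```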

 

The Need for Transparency in Both Realms:

- In the internet infrastructure, biases tend to be more visible and easier to regulate through instruments that can monitor traffic or network behavior.

- In AI systems, we need similar protocols and instruments to ensure transparency, but the complexity of AI introduces challenges that are harder to monitor through simple tools. The detection of bias often requires ongoing audits, transparency in data, and rigorous fairness testing during and after the training process.

 

Instrument-Based Detection vs. Complex Audits

Biases exist in both realms, internet infrastructure and AI models, but they differ in how easily they can be detected and corrected. While biases in infrastructure can often be quantified through instruments and corrected with technical adjustments, AI bias requires more advanced audits and transparency. The opacity of AI systems calls for deeper, context-aware tools and protocols to make biases detectable and addressable.

 

The point about biases in infrastructure being detectable by instruments is an important contrast: it reinforces why AI model transparency is even more crucial, because we cannot always rely on easily observable or detectable signals to identify bias.

 

Examining contrarian perspectives


Here's a list of potential contentions or defective points that might arise from the ideas presented:

 

1. Protocols Before Regulations:

Defensive View: While protocols are a good starting point, some may argue that regulations should come first to prevent harm. For instance, in highly sensitive areas like healthcare or criminal justice, waiting for protocols to naturally develop could expose people to risks from biased or unsafe GenAI outputs.

Counterpoint: Regulations can sometimes stifle innovation by over-legislating an evolving technology. But critics could argue that urgent guardrails are necessary before harm occurs, especially given how fast GenAI is being deployed.

 

2. Model Transparency Around Bias:

Defensive View: Full transparency might not always be feasible or desirable. Some argue that companies need to protect intellectual property, and disclosing too much about model training and data could compromise business interests or security.

Counterpoint: Critics might say that companies could use lack of transparency as a shield to hide biases, claiming trade secrets while leaving users exposed to systemic discrimination. Also, excessive focus on transparency might put an undue burden on small developers, limiting competition.

 

3. Bias Detection in AI Models:

Defensive View: Detecting and mitigating bias in AI models is a highly complex and resource-intensive process. Critics might argue that pushing for bias detection at the foundational level could slow down innovation, especially for startups or less resourced organizations.

Counterpoint: Bias detection tools are still evolving, and current techniques might not adequately catch all biases, particularly those that emerge from subtle interactions in data. Some could argue that trying to detect all bias may never be fully possible, so it could create false confidence in AI systems.

 

4. Unfixable Bias in AI Models:

Defensive View: Some might contest that bias is not always unfixable and can be mitigated even at the adoption layer. For example, you can apply fine-tuning techniques to adjust the model’s behavior or use prompt engineering to minimize biased outputs.

Counterpoint: Others might argue that the fundamental structure of a biased model can impose hard limits on what can be corrected post-training. Furthermore, the scope of bias (affecting certain groups or topics) might be too wide-ranging to patch up at the application layer.

 

5. Biases in Internet Infrastructure vs. AI Models:

Defensive View: While biases in AI models are real and significant, critics might argue that comparing them to biases in internet infrastructure isn’t entirely fair. For example, some could say that infrastructure biases (e.g., net neutrality violations) have wide, measurable impacts on access to information, and thus they can be just as harmful as AI model bias.

Counterpoint: Opponents might say that internet infrastructure biases are technical and operational, so they don't influence the content itself, whereas AI bias has a deeper societal impact because it affects the interpretation of knowledge and decision-making, which is harder to measure and remedy.

 

6. Instrumental Detection of Bias:

Defensive View: The idea that biases in AI can be instrumentally detected, like infrastructure biases, is contentious. Critics might argue that existing instruments for detecting AI bias (e.g., fairness testing) are still immature and not widely standardized, making it hard to reliably measure biases.

Counterpoint: Even with tools like SHAP or LIME, critics could argue that detecting bias isn't enough; interpreting it and understanding its ethical implications requires human judgment. There's also a risk of over-reliance on these tools, which might not catch all biases or might misrepresent the problem as “fixed” when it's not.

 

7. Complexity of Bias Correction at the Foundational Level:

Defensive View: It’s possible that addressing bias at the foundational level (during model training) could be incredibly challenging. Models are trained on vast datasets, often compiled from diverse and potentially biased sources (e.g., internet data). Critics could argue that trying to cleanse or balance these datasets could be impractical or lead to new forms of bias (e.g., overly sanitized or incomplete training data).

Counterpoint: Some might question whether it’s even possible to build a completely unbiased AI, given the societal and historical biases baked into the data we rely on. Trying to remove bias completely might lead to sterile, inauthentic outputs that no longer represent the complexities of human knowledge and experience.

 

8. "User Beware" Approach for GenAI:

Defensive View: While "user beware" works to some extent for the internet, applying that to GenAI might be dangerous. Unlike the internet, where users typically know the sources they’re accessing (websites, authors), GenAI’s outputs can appear authoritative even when they’re biased or incorrect. This could make the "user beware" approach less effective, as users might overestimate the reliability of GenAI outputs.

Counterpoint: Some might argue that end-users shouldn’t bear all responsibility for understanding AI bias. Instead, developers and companies should be responsible for minimizing bias and ensuring the AI is safe and reliable out of the box.


In summary, while the ideas are strong and grounded in well-reasoned logic, the points above illustrate potential contentions or challenges. Issues like regulations vs. protocols, the feasibility of bias detection, and the limits of bias correction at different layers of the AI stack can create space for debate. These questions also reflect broader tensions between innovation, ethics, and responsibility in GenAI development.

