AI of, by, and for the People
How can we ensure that AI promotes the public good? PRAISE is a new framework for aligning public and private sector incentives for AI development and use.
By Thomas Gilbert and Jiri Rutner
Integrating AI systems poses a deep challenge to CTOs and CIOs in the public sector: how can they balance passive data collection with active public engagement while managing risks and ensuring responsible oversight? Strategic resource allocation is essential, as is working with county chief administrative officers and municipal city managers to develop governance frameworks that address risks proactively. Recent shifts in public sentiment and technological progress reveal a major opportunity: CTOs and CIOs can leverage open governance principles to ensure AI projects are not only cutting-edge but also accountable, inclusive, and aligned with societal needs.
As an example, consider finetuning, a technical method that lies at the heart of today’s AI gold rush. In practice, finetuning operates by trial and error, with little sensitivity to public trust or contexts of use. As a result, it places users in a passive relationship with AI capabilities. In a workshop held at The New York Academy of Sciences in May 2024, leading researchers, public policy voices, and open source advocates came together to articulate an alternative vision for AI development. Participants discussed how to open finetuning up to public scrutiny and hold its developers accountable.
Synthesizing insights from that workshop, Hortus AI proposes a framework of Public and Responsible AI through Societal Empowerment (PRAISE) to cultivate the expression and integration of public values in AI applications. The key idea behind PRAISE is not only that AI models are trained on active feedback, but that the public sector helps decide whom they are built for, what use cases matter most, and how models should behave in context. For this to work, the public’s relationship with AI must be active and not merely passive.
An Increasingly Passive World
In local government, challenges are magnified by increasing demands to do more with fewer resources. This dynamic often keeps local government teams in a reactive or passive state. Unless a CIO or CTO is well-resourced or has the support of decision-makers or political leadership to secure funding for proactive technology initiatives, their ability to lead transformative work remains constrained. Additionally, local governments frequently face brain drain and rigid pay structures, which further complicate efforts to recruit and retain technical talent.
Meanwhile, private AI development depends on three things: capital, data, and public goodwill. Today, public goodwill is finite and dissolving: just 35% of the American public now trusts companies that build and sell AI tools. The consequences are severe, as the alignment between company incentives and consumer demand depends on our collective willingness to keep playing with what is deployed. As such, dwindling public support for leading GenAI providers constitutes a major form of market failure that local governments could help fix.
There are three reasons for this market failure. The first is public impacts: from self-driving cars to social media recommendations, software platforms now impact entire populations, not just individual users.[1] The second is a growing gap between AI’s intended and actual use: what developers intend for AI is badly dissociated from how the public actually uses it, as evidenced by applications like Google’s Gemini model.[2] Third is the loss of user agency: at present, users have no say in how AI works or is built. Even self-proclaimed open source providers like Meta merely give users more choice among available options; users still do not have a voice.[3]
Without active public participation, training data becomes stale, the GenAI marketplace becomes arbitrarily fragmented, and leading model providers remain underwater in a sea of lawsuits. The GovAI Coalition has stepped into this vacuum and begun to correct this imbalance by unifying the voices of local governmental agencies and centering public interests. This approach has several benefits. First, it is helping local governments collectively advocate for greater transparency and accountability from private sector partners. Second, it is aligning interests and presenting a cohesive front, empowering agencies to push for practices that center public engagement, resource allocation, and AI governance. Third, it is helping bridge gaps between public and private sectors, ensuring that AI systems are developed and deployed in ways that are both transparent and aligned with public values.
This last point is critical, and addresses the misaligned incentives that define today’s AI ecosystem. Companies typically focus on return on investment (ROI), often treating people as a means to financial gain, up to and including behavioral manipulation to drive profits. For example, social media companies have built platform features like hard-to-cancel subscriptions, infinite scrolling, and push notifications that prioritize shareholders’ interests over users’. In contrast, government services are oriented toward serving people first, prioritizing high-quality service delivery while striving to remain financially sustainable. The primary motive in government is not to increase ROI but to ensure that citizens’ needs are met effectively and equitably.
This fundamental difference in motivation highlights the need to align incentives between private sector partners and local governments. For governments, AI must be developed and deployed in ways that prioritize public value and transparency over profit-driven motives. For companies, AI’s capabilities must be used and trusted in ways that make their business models sustainable. Bridging this gap requires meaningful collaboration and mutual accountability so that private sector AI innovations and public service objectives are able to serve each other.
A Counter-Proposal: PRAISE
AI’s technical development faces multiple hurdles that active public feedback could correct.
A) Problems of context. Finetuning works by aggregating preferences from individual consumers of model outputs (see the sketch after this list). However, values like autonomy, privacy, and equity matter to us as citizens, not just as consumers. They help define our relationships with each other and with society as a whole. In effect, finetuning is blind to these values.
B) Problems of use. There is a profound mismatch between the private feedback used to finetune models and the public feedback that defines the actual context of their post-deployment use. Most present-day models are finetuned on small sets of individual inputs, yet made available worldwide all at once. This kind of deployment comes at the high cost of societal harms incurred through abuse or misuse.
C) Problems of disclosure. Even once models are deployed, there is far too little transparency to allow for robust reasoning about, or explanation of, their decision-making. The scientists qualified and motivated to probe how these models work and interact with people are held back by the financial incentives and nondisclosure agreements of private companies. Yet understanding how these models work is essential for predicting risks and earning the public’s trust.
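To make problem (A) concrete, here is a deliberately simplified Python sketch of the aggregation step behind most finetuning pipelines. Everything below is hypothetical and illustrative, not any provider’s actual pipeline; the point is only that a majority vote keeps the preference while discarding who the raters were and the context their judgments came from.

```python
from collections import Counter

# Hypothetical preference data: each rater compares two model outputs and
# records which one they prefer, along with the context of their judgment.
ratings = [
    {"rater": "consumer_1", "context": "drafting a personal email", "preferred": "output_a"},
    {"rater": "consumer_2", "context": "screening benefit claims", "preferred": "output_b"},
    {"rater": "consumer_3", "context": "drafting a personal email", "preferred": "output_a"},
]

# The aggregation step: a simple majority vote becomes the training signal.
label = Counter(r["preferred"] for r in ratings).most_common(1)[0][0]
print(label)  # "output_a" -- the civic context behind the dissenting vote is lost
```

Real pipelines use reward models rather than raw vote counts, but the basic move is the same: individual preferences go in, and a context-free signal comes out.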
PRAISE presents a new division of responsibilities for AI development. Today, companies are responsible both for deciding how AI should work and for ensuring it works as intended. This undergirds the present ecosystem of passive feedback. But there is another way. Because achieving value alignment with AI systems is a problem of public concern, the public itself must play an active role. In PRAISE, this dysfunctional, passive feedback relationship is reversed. The problems of context (from where, and from whom, do annotations come?), use (where, and by whom, will capabilities be deployed?), and disclosure (what information about the feedback process must be shared externally, and how?) are not ignored but integrated within new forms of public–private feedback. These reconfigured relationships make the public responsible for deciding how AI should work, and companies accountable for ensuring it works as intended. This is outlined in the figure below.
Figure: PRAISE’s outer and inner feedback loops. Public clients distill a spec for companies, who then disclose relevant model features. After deliberation, the public selects capabilities to be deployed for widespread use.
How does PRAISE work? First, PRAISE integrates feedback based on public values and contexts. Second, PRAISE interprets alignment in terms of the active expression of publics’ values and aspirations, not passive aggregations of individual preferences. Rather than private companies trying and failing to represent public interests, PRAISE instantiates those interests so that companies can build for them.
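For readers who prefer a concrete trace of the loops in the figure above, here is a minimal, purely illustrative Python sketch of PRAISE’s division of labor. The names PublicSpec, Disclosure, Company, and deliberate are invented for this example and do not refer to any existing tool or API.

```python
from dataclasses import dataclass

@dataclass
class PublicSpec:
    """What public clients distill for companies: priorities and constraints."""
    use_cases: list[str]      # the use cases the public decides matter most
    constraints: list[str]    # in-context behaviors the public rules out

@dataclass
class Disclosure:
    """What a company shares back: a candidate capability plus evidence."""
    capability: str
    meets_constraints: bool   # whether (stubbed) evaluation against the spec passed

class Company:
    """Stand-in for a model provider, accountable for building to the spec."""
    def build_and_disclose(self, spec: PublicSpec) -> list[Disclosure]:
        # Inner loop: propose one candidate per use case and disclose the
        # result of evaluating it against the spec's constraints.
        return [Disclosure(capability=uc, meets_constraints=True)
                for uc in spec.use_cases]

def deliberate(disclosures: list[Disclosure]) -> list[Disclosure]:
    """Outer loop: the public selects only capabilities with passing evidence."""
    return [d for d in disclosures if d.meets_constraints]

# The public decides how AI should work ...
spec = PublicSpec(use_cases=["311 request triage", "permit lookup"],
                  constraints=["no profiling of residents"])
# ... and the company is accountable for showing that it does.
approved = deliberate(Company().build_and_disclose(spec))
print([d.capability for d in approved])  # capabilities cleared for deployment
```

The design choice worth noticing is the separation of ownership: the spec belongs to the public, the disclosure to the company, which is what makes the feedback active rather than passive.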
Of course, there are jurisdictional complexities when identifying stakeholders. Some governmental agencies may not have functional authority over other stakeholder groups that are nested under different jurisdictions. For example, police departments often fall under city jurisdiction, while health and human services agencies typically operate under county jurisdiction. Similarly, judicial systems may be governed at the state or federal level, entirely separate from local oversight.
For CIOs and CTOs, these overlapping jurisdictions can make collaboration with other governmental agencies challenging, if not impossible, within a single unified framework. PRAISE addresses this by encouraging coordination across jurisdictions to align AI development and deployment with public values. To be effective, such efforts must bridge boundaries, fostering cooperation across city, county, state, and even federal levels to ensure consistency and inclusivity in how AI supports citizens. What distinguishes PRAISE is the continuous presence of active feedback between situated publics and private AI companies.
Conclusions and Next Steps
AI technologies will fail if their goal is to passively mirror or represent human values. Values are not just behavioral preferences; they are creative expressions of how we relate to ourselves and other people. They are the scaffold for active human flourishing. Ultimately, PRAISE is a trellis on which public values can grow. The GovAI Coalition ought to support and help build this scaffold in order for AI to be worthy of the capabilities it promises. What Hortus AI provides is a framework and tools to ensure that our society’s AI capabilities serve the needs of the public. Hortus AI encourages CTOs and CIOs to apply this framework when considering how to integrate AI using both active and passive feedback.
—
About the authors
Thomas Gilbert is the CEO of Hortus AI.
Jiri Rutner is an Assistant to the City Manager, Homelessness Solutions Enterprise Manager at the City of San José.
Note: The opinions expressed in these articles are solely those of the author(s) and do not necessarily reflect the views, positions, or policies of the GovAI Coalition or the authors’ affiliated professional organization(s).
—
This is the blog for the GovAI Coalition, a coalition of local, state, and federal agencies united in their mission to promote responsible and purposeful AI in the public sector.
[1] Cruise, for example, lost the trust of residents in Austin and San Francisco when its fleet began to gunk up public roads and drag pedestrians to the curb.
[2] Google nearly lost the enormous goodwill baked into its brand when offensive outputs of its Gemini model went viral on social media in February 2024.
[3] Meta’s remarkable rebrand as the open source option among leading AI providers is in part a result of anticipating user discontent with closed source competitors like OpenAI.