Jay Kemp

Taiwanese Digital Minister Audrey Tang’s “Job Description” — a guiding star for generative AI, participatory democracy, and healing online deliberation


This piece was originally published on the Reboot Democracy blog.


Taiwan didn’t always have a Digital Minister. In 2016, when Tang was working at Apple after a prominent career in open-source software and civic hacking, they were approached to become the first one — and asked to write their own job description. They did so in the form of a poem I find particularly prescient for the creation of an inclusive, public AI strategy. They called it “A Job Description”:


When we see the Internet of Things, let’s make it an Internet of beings. 


When we see virtual reality, let’s make it a shared reality. 


When we see machine learning, let’s make it collaborative learning. 


When we see user experience, let’s make it about human experience.


Whenever we hear that a singularity is near, let us always remember that plurality is here.


Can we take this “job description” as a guide for imagining generative AI systems that strengthen participatory democracy? We may just find that Tang’s words point to a few key development principles, relevant well beyond Taiwan.


“When we see the Internet of Things, let’s make it an Internet of beings.”


Changing an Internet of Things into an Internet of beings means codifying two key tenets of digital democracy: trust and security. Trust that our online platforms are built to be safe, that our data remains secure, that the people we engage with online are actually people, and that ongoing deliberation, analysis, innovation, and input can keep our online spaces secure against the latest threats.


AI-generated muck spamming the internet, AI-powered bots filling the replies of news sources and politicians, publications allowing AI tools to write erroneous articles without human input… We need to shift focus. What would the digital commons look like with generative AI tools that engendered trust? How about opening new lines of communication between representatives and constituents? Or using deep neural networks to moderate hate speech online? Can AI transcription of virtual meetings pave the way for improved social relations, better behavior, and more equitable conversations?


However, trust is a two-way street. As Minister Tang mentioned on Friday, technologists in return need to “radically trust [their] fellow citizens” – particularly, without expecting or needing their trust in return. Tools should absolutely be studied for safety risks prior to release, but limited public release for experimentation and feedback should be part of that analysis. Play leads to technological improvement, improved tools create safer spaces, and safe spaces foster play.


“When we see virtual reality, let’s make it a shared reality.”

Digital communities do not exist in silos; nor, then, should the tools we build to serve them. That’s why a core tenet of digital democracy should be public, open technology – sharing information is critical to Minister Tang’s call for a shared virtual reality. I once joked to a colleague that our mission statement should be “Open Source and Open Discourse”; joking aside, the two are fundamentally intertwined.


Recently, a large group of civil society organizations and leading academics wrote a letter urging the current administration to prioritize open-source AI development. Addressed to Commerce Secretary Gina Raimondo, the memo outlined the necessity of open-source AI for driving innovation and value, for mitigating the risks of closed systems, and for allowing tailored solutions to problems instead of overbroad regulation.


Open-source systems like Pol.is, a sentiment-gathering platform that’s one of many used in the vTaiwan process, allow policy officials to leverage technology for public engagement without needing an entire development team. Taiwan did not build any of the platforms it uses for public engagement. Yet, by taking advantage of a global network of research connections and open-source tools, it was able to cobble together innovative processes for resident deliberation and engagement. If we want to create robust digital communities – a shared reality built on AI uses that are co-creative, rather than isolative or competitive – we need more open-source, public tech.


“When we see machine learning, let’s make it collaborative learning.”


I mentioned vTaiwan, which is an online and offline process designed to move ideas from public consensus to legislative enactment. The process is driven by consensus, rather than controversy; the comments and posts with the most commonality are promoted across digital communities to gauge resilience of agreement. Participants are even able to rewrite the questions themselves. 
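

To make that mechanism concrete, here is a minimal Python sketch of the kind of group-informed consensus scoring Pol.is popularized – an illustration of the idea, not Pol.is’s actual implementation. The vote matrix and cluster count are invented for the example: participants are clustered by voting pattern, and a comment rises only if every opinion cluster agrees with it.

```python
# Minimal sketch of group-informed consensus scoring (Pol.is-style idea,
# not Pol.is's actual code). Participants vote agree (+1), disagree (-1),
# or pass (0) on short comments; comments are ranked by approval in the
# *least* agreeing opinion cluster, so only broadly shared statements rise.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = participants, columns = comments.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1,  1],
    [ 1, -1,  1,  1],
    [ 0, -1,  1,  1],
])

# Cluster participants by voting pattern. (Pol.is projects votes with PCA
# before clustering; we cluster raw votes to keep the sketch short.)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

def consensus_score(comment_votes, groups):
    """Agreement rate within the least-agreeing cluster."""
    rates = [np.mean(comment_votes[groups == g] == 1) for g in np.unique(groups)]
    return min(rates)

scores = [consensus_score(votes[:, j], groups) for j in range(votes.shape[1])]
for j in np.argsort(scores)[::-1]:
    print(f"comment {j}: consensus score {scores[j]:.2f}")
```

The design choice worth noticing is the `min` across clusters: a comment loved by one camp and rejected by the other scores low, which is exactly how commonality gets promoted over controversy.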


For AI, collaborative learning has to be more than teams of developers churning out tools and updates through privately sourced models for elite clients. Elitist “black boxes” of data and top-down AI decision-making inherently can’t be human-centered; arguably, they’re more likely to exclude safety, trust, or democratic values. What if the public’s vision were incorporated into new tools or proposed policies before they were ever drafted or prototyped?


The ability for average citizens to write questions themselves, rather than only answering questions written by higher-ups or system moderators, also allowed shifts in conversation that your average lawmaker couldn’t predict. vTaiwan – an experiment in deliberative democracy that produced consensus-driven outputs the government could actually act on – ultimately helped craft 26 pieces of national legislation before its impact faded. I hope that when governments look for collaborators on the next policy proposal, they look no further than the citizen conversations already happening around these issues. Then, they should use already-existing participatory tools that can foster productive discourse.


“When we see user experience, let’s make it about human experience.”


In November, I first learned the concept of human-centered design from Dakuo Wang, an Associate Professor at Northeastern University. He explained that human-centered design isn’t a novel tenet, but it is a valuable lens for ensuring AI technologies reflect real-world contexts and user needs. Wang’s example, based on his research: many doctors don’t want AI tools that make diagnoses for them, but tools that take over menial, tedious tasks so they have more time and attention for thoughtful diagnoses. Said differently, healthcare may not be improved by handing the reins to algorithms, but by meeting the needs of both user (doctor) and affectee (patient).


The same issue presents itself in online deliberation. Current algorithms are built for engagement with content, showing users personalized feeds designed to keep them looking – and generating advertising revenue. Stanford University researcher Tessa Forshaw argued that this traditional approach, by detaching programmers from the individuals most impacted, can “produce out-of-touch products at best or reproduce inequities at worst.” Beyond driving excessive screen time, anxiety, and feelings of inadequacy, such algorithms also create online echo chambers that fuel polarization, limit exposure to diverse ideas, reinforce bias, and reduce effective deliberation.


What if AI were used to generate algorithms for engagement with community, rather than content? By demystifying algorithmic design for the public, prioritizing commonality over controversy, selecting data with mindfulness of bias, continuously refining for shifting contexts, incorporating the matrix of diverse users and needs, and scrutinizing impact upon release, AI developers can produce participatory tools that are human-centered rather than capital-centered.
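

As a toy illustration of that shift, the sketch below contrasts a click-driven ranking with one that scores items by their weakest support across opinion clusters. Every item name and number here is hypothetical, and real systems would learn these signals rather than hard-code them.

```python
# Hedged sketch: engagement-based ranking vs. "commonality over controversy"
# ranking. All items and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    clicks: int              # raw engagement signal
    approval_a: float        # approval within one opinion cluster (0-1)
    approval_b: float        # approval within another cluster (0-1)

feed = [
    Item("outrage take", clicks=9_000, approval_a=0.9, approval_b=0.1),
    Item("shared local issue", clicks=1_200, approval_a=0.7, approval_b=0.8),
    Item("policy explainer", clicks=800, approval_a=0.6, approval_b=0.6),
]

# Engagement ranking rewards whatever gets clicks, controversy included.
by_engagement = sorted(feed, key=lambda i: i.clicks, reverse=True)

# A bridging score rewards items the *least* approving cluster still likes,
# so divisive content sinks even when it is heavily clicked.
def bridging_score(item: Item) -> float:
    return min(item.approval_a, item.approval_b)

by_bridging = sorted(feed, key=bridging_score, reverse=True)

print([i.title for i in by_engagement])  # "outrage take" ranks first
print([i.title for i in by_bridging])    # "shared local issue" ranks first
```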


“Whenever we hear that a singularity is near, let us always remember that plurality is here.”


The final tenet of Minister Tang’s poem is also a strong reminder that democratic solutions should never be owned purely by one individual, one team, or one project. At the end of Friday’s conversation, which can be viewed here in full [embed link], one of our AI for Impact co-ops asked Minister Tang about their metaphorical “sticky note” of future ideas or projects for digital democracy in Taiwan. Instead of sharing a laundry list of platform developments or research partnerships, Minister Tang spoke to their own conception of the “good enough ancestor,” which should resonate with all scholars of participatory democracy. “Instead of writing too-good systems that can’t be changed,” Minister Tang explained, “we [should] leave a crack in everything that lets the light get in.”


Plurality and participatory frameworks essentially hold space for the naysayers – deliberative space for those who voice disagreement. In Minister Tang’s case, that meant an anecdote about responding to a 92% approval rating by thanking the 8% who disapproved, the ones who keep us honest. The “good enough ancestor” also acknowledges the shortcomings of human ego by refusing to lock policy in impenetrable bureaucracy. In this frame, I see the light as those who may come along later with better ideas or understandings, technologist or otherwise; the crack, as the mechanisms we codify for refinement, alteration, or even complete overhaul as needed.


Our AI policies need to be led by “good enough ancestors” who acknowledge the vast potential we can’t possibly foresee. Overburdensome regulation or innovation-killing capital systems could impede future technologies that improve society, our democracy, or our online communities. Cementing restrictions, rather than values, will only tie our hands. If we reject singularity – just one Congressional package of AI bills without follow-up, just one generative AI model for all cases, just one taskforce dedicated to writing the rules – we have a better chance of co-creating participatory, adaptive, and ultimately impactful AI tools for the public good.

