Todd Christensen

Exploring AI For The Design Industry. Part One.

Part 1: Ethical Guardrails (& the Lack Thereof), Designer’s Dilemma & Why Is This Tech Happening Now?


FULL DISCLOSURE: Let me first ask for forgiveness and patience. Like most people, I am not an expert in AI. And after a lifetime of sci-fi and capricious market malignancy, my instinct on the subject is to be perhaps too alarmist and distrustful. This Part 1 will most definitely come off as a bit of a rant. So if you are an expert in AI, or an AI tech evangelist, I ask that you show restraint and patience. Or ignore me. Because in this first of what I hope to be a series of articles, I am going to explore my instincts and biases and, yes, try to find solid justifications for those instincts. And to foreshadow for you: it's not hard to find justifications for my biases.


I think there are plenty of billionaire-funded and amplified evangelist voices enthusiastic on the subject of AI technology. So it's not going to hurt anyone to hear what one more tiny, ignorant human has to say about AI and design. Because little I say is likely to alter the trajectory of billion-dollar corporations. But I do believe a whole lot of designers feel like I do. And I intend to try to give voice to those designers. And maybe together we can make some kind of difference.


Later I hope to explore a more optimistic tone. If I can find one. I do promise to try in good faith. And if I fail at the end of this four- or maybe five-part exercise, then you can let loose on me. In fact, I would love a dialogue on this. For now? PREPARE FOR AN EPISODE OF BLACK MIRROR.


Ethical Guardrails. A great number of very smart people have spent a considerable amount of time and effort thinking about the ethical use and development of Artificial Intelligence. Computer scientists. Neuroscientists. Academics. And a host of PhDs.


To my knowledge very few of them are CEOs of corporations currently developing AI.

And those who consider ethical approaches important have, either collectively or separately, arrived at pretty much the same core value recommendations. One such person is Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO, who said the following (Nov 23, 2021):

“In no other field is the ethical compass more relevant than in artificial intelligence. These general-purpose technologies are re-shaping the way we work, interact, and live. The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.”

Why Is This Important? Garbage In, Garbage Out. We live in an imperfect world. A world of conflicting interests and agendas. A world infused with racism and sexism. And as Gabriela Ramos noted:

“Without the ethical guardrails, it (AI) risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms. AI business models are highly concentrated in just few countries and a handful of firms — usually developed in male-dominated teams, without the cultural diversity that characterizes our world…”

We have already seen real-world problems surface in the unintentionally sexist language used by LLMs, which has produced startlingly gender-biased results. See: 4 Ways to Address Gender Bias in AI
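
For a sense of how this kind of bias gets measured, here is a minimal sketch of one common probing approach: feed a model occupation-based prompts and tally the gendered pronouns in its completions. The `complete` function below is a canned stand-in I invented for illustration; in practice you would call whatever text-generation API you actually use. A consistent skew across many prompts and samples is the bias signal.

```python
# Minimal sketch of an LLM gender-bias probe (illustrative only).
# `complete` is a stand-in for a real model call.
import re
from collections import Counter

def complete(prompt: str) -> str:
    """Stand-in for a text-generation API; returns canned text for illustration."""
    canned = {
        "The nurse said that": "she would check on the patient shortly.",
        "The engineer said that": "he would review the schematics tomorrow.",
    }
    return canned.get(prompt, "they would follow up soon.")

PROMPTS = ["The nurse said that", "The engineer said that"]
counts = Counter()
for prompt in PROMPTS:
    text = complete(prompt).lower()
    counts["feminine"] += len(re.findall(r"\b(she|her|hers)\b", text))
    counts["masculine"] += len(re.findall(r"\b(he|him|his)\b", text))

# In a real probe you'd sample many prompts and many completions per prompt;
# a consistent pairing (nurse -> "she", engineer -> "he") is the bias signal.
print(dict(counts))
```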


Where we have faults, so do our thinking machines. So it seems sensible to proceed with this in mind.


UNESCO held its first global forum on AI ethics in November 2021, and is set to hold another in February 2024. At that first forum, they hammered out four core ethical values for AI development:

  1. Human rights and human dignity. Respect, protection and promotion of human rights and fundamental freedoms, and human dignity

  2. Living in peaceful, just, and interconnected societies

  3. Ensuring diversity and inclusiveness

  4. Environment and ecosystem flourishing


Sounds pretty reasonable, right? Specified under those four values are ten more detailed principles:

  1. Proportionality and Do No Harm The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses.

  2. Safety and Security Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

  3. Right to Privacy and Data Protection Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.

  4. Multi-stakeholder and Adaptive Governance & Collaboration International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.

  5. Responsibility and Accountability AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.

  6. Transparency and Explainability The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.

  7. Human Oversight and Determination Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.

  8. Sustainability AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.

  9. Awareness & Literacy Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.

  10. Fairness and Non-Discrimination AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.


Again. These are completely reasonable goals. Goals that, to paraphrase the film Blade Runner, the god of capitalism wouldn't let us into heaven for.


Yet it seems that almost no major player in the field of AI development is willing to go on record saying they will adhere to these values in any provable, transparent sense. Many software engineers openly admit they have no idea what is going on inside these models.

I guess we rely on the mercy of modern billionaires to do the right thing? Right?


It will not take much Google searching to find alarmist statements and tweets by evangelist AI developers like Elon Musk and Sam Altman promising their AI models will studiously avoid being “woke.” This is the language du jour of culture warriors. Euphemisms used exclusively by people who conflate compassion with moral weakness. In other words, dog-whistle language meaning “don't expect us to honor anything but profit.” Yes. I said it.


So there is our context. We have a reasonable map to follow. One that most reasonable people would agree to (I would hope).


But very few of the so-called thought leaders forking over huge sums of cash to develop AI seem eager to follow this or any map at all. And the market and investor class do not seem to be demanding ethical AI development. In fact, the wealthiest men in the world seem totally hostile to any constraints on their whims at all. I do not know what to do with that but despair.


The Designer's Dilemma. I started in this business by accident. I don't have a design degree. I graduated with a degree in broadcasting, hoping to work in TV and film. And through a hilarious chain of serendipitous economic and relationship catastrophes, I ended up owning a Mac Plus with some layout software. And I needed a way to use it. I started by laying out posters for bands and clubs in Peak Grunge Seattle. I then worked at a Kinko's managing a desktop publishing center. I probably helped train every designer in Seattle who wandered through a Kinko's.


Through that crucible I learned how to learn. I devoured every new software application I could get my hands on. Every new way to use that mighty 512K of Macintosh I could find, I put to use. I say all of this to demonstrate that I am in no way a Luddite. I want technology. I love technology. But above all, after three decades of seeing technology's impact on design, I have the experience to see which technology is constructed to help my industry and which technology is designed to exploit it.


So what is AI for?

Simply put, it is to automate, speed up, simulate, and improve upon some of the cognitive abilities of humans. But why? Why is it being developed now? I will not be engaging in philosophical debates about what is or is not art. That is a huge distraction from the central reality that, as nearly every computer scientist and economist I can think of says, AI is going to disrupt our economic reality on an unpredictable scale and at an accelerated rate.


There is a less cynical reply to that question of “Why?”: errors. Humans are fallible. They are slow. AI systems can analyze images like MRIs and offer less error-prone diagnostic results, for instance. They can often fly planes better than humans. AI can and will save lives. But this is not my purview. I am concerned about my own bread and butter.


More realistically, and to put it crassly, the bulk of the investment in AI tech exists because those human abilities are a form of labor that costs an owner class money. And the market demands that those costs be reduced. By existing as line items on a budget, unpredictable humans with rights and needs are an interminable blight on profits.


But it will create new jobs, you say! Sure. New careers will evolve. But they will be few, and they will replace millions of high-paying creative jobs. And everyone will scramble to train for those few careers until they, too, are targeted for obsolescence once they cross the capricious threshold of costing too much.


You cannot convince me that the disruptor capitalist class and Wall Street are investing billions of dollars to develop a human-labor-replacement technology that will INCREASE their total labor expenditures. That argument makes no sense. And if it happens, it will happen entirely by accident. I don't think we should rely on accidents of history.


AI is being developed now because the breakneck momentum of computer science means it CAN be developed. If we adopt these technologies without guardrails, do we not merely hasten our own irrelevance?


Yes. First, The Cynicism. What does this mean for designers? Our dilemma is that many of these LLM/AI tools will undoubtedly make our jobs easier. They will streamline the iteration of web layouts, for instance. Image editing and some of the tedious work of copy editing or illustration production are going to become much less time-consuming. But that also means: if our jobs are easier, how do we justify getting paid the rates necessary to live, at a time when living is only getting more expensive?


The Productivity Paradox. According to the Economic Policy Institute:

“Productivity and pay once climbed together. But in recent decades, productivity and pay have diverged: Net productivity grew 59.7% from 1979-2019 while a typical worker’s compensation grew by 15.8%” 

Therefore, I am deeply suspicious that utilizing more productive tools will yield higher pay, when clearly the opposite has (mostly) been true for some time. And given the outrageous inflation of healthcare and housing costs in the US, we are seeing a resulting contraction in real wages. Working faster is rarely advantageous for the worker. But it is always advantageous for shareholders.
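
To make the scale of that divergence concrete, here is a back-of-the-envelope calculation using the EPI figures quoted above. The $50,000 base salary is a hypothetical number for illustration only, not an EPI statistic:

```python
# Back-of-the-envelope look at the EPI productivity-pay gap, 1979-2019.
# The growth rates are the ones quoted above; the salary is hypothetical.

productivity_growth = 0.597  # net productivity grew 59.7%
pay_growth = 0.158           # typical worker's compensation grew 15.8%

# How much faster did productivity grow than pay?
ratio = productivity_growth / pay_growth
print(f"Productivity grew {ratio:.1f}x faster than pay")  # ~3.8x

# If pay had tracked productivity, a hypothetical $50,000 salary
# would have ended up here instead:
base_salary = 50_000
with_pay_growth = base_salary * (1 + pay_growth)
with_productivity_growth = base_salary * (1 + productivity_growth)
print(f"Actual trajectory:      ${with_pay_growth:,.0f}")        # $57,900
print(f"Productivity-tracking:  ${with_productivity_growth:,.0f}")  # $79,850
print(f"Gap per worker:         ${with_productivity_growth - with_pay_growth:,.0f}")
```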

(Right now, you may be asking yourself, “My goodness, is this Todd fellow a communist or something?” Hahaha. Not yet.)


How We Design. Much of what these tools streamline, I ENJOY. I like exploring iterations in design. It's a necessary part of the creative conceptualization process. One idea spawns a thousand more. It's like playing scales in music: the practice builds the skills to improvise. Will eliminating the repetitious play of production not also reduce some of the joy and originality of creating? Maybe. That might be a function of the struggle to learn new methods during the concepting phase of design. But I like what I like.


Unpaid Tutors & Training Your Replacement. And there is yet another pernicious reality. Almost every LLM/AI software tool has an ulterior motive. It's in the end-user agreements, however many euphemisms they employ to disguise it.


By using those tools, you are agreeing to contribute to a database that will make future tools better. Are we being paid for this? Are you being credited for this? No. You paid your college tuition for your education. You pay for the books you buy to learn from and the software tutorials to which you subscribe. But the software companies developing these systems get to use your experience and expertise for free. No. Wait. You are PAYING them subscription fees to train their AI systems. That seems like a pretty great deal for them.


Pitchforks & Torches. Am I being unfair? Probably. But given the rise of authoritarian regimes utilizing these technologies, and a stack of history books proving to me that prudence and caution are warranted, why should I be fair to massive corporations with every incentive to be unfair? For now, I urge you to be as critical as you can.


Should designers participate in this technology? And how do we do so ethically? List and cite our prompts. Give credit to any artists we use as springboards, certainly. But more realistically, do we have any choice but to plunge ahead?


As it stands, I do not think we have much of a choice. And I hope to learn more about choices available to us. I’d like us to have choices.


So. Now that I have riled you up, I will leave it here for my next installment.


