thing 9: GenAI and Creative and Intellectual Work

In what ways does GenAI challenge traditional notions of authorship, originality, and expertise?

Read

You will compare and contrast these two readings. The Guardian article highlights how some creatives and academics are rejecting AI tools, while Seth Godin argues for a human-centered embrace of AI.

‘Nobody wants a robot to read them a story!’ The creatives and academics rejecting AI – at work and at home

Productivity, AI and pushback

Discuss

What is lost and what is gained when we let machines help shape our words, ideas, or images? How should we navigate the ethical and professional tensions between rejecting, embracing, or adapting to generative AI?

26 replies on “thing 9: GenAI and Creative and Intellectual Work”

I first read The Guardian article, and afterward I felt panicked at the thought of GenAI taking over and humans having no purpose – or, as Justine Bateman stated, “…essentially become just a skin bag of organs and bones…”. Then I read the blog post, ‘Productivity, AI and Pushback’, and realized we have evolved over centuries through the onset of many new technological advances, and we humans are still here. If we can harness AI to work with us and not against us – or harm us – we all can and will benefit and thrive. Now it is incumbent on all of us to work together to make that happen.

I am 100% in agreement with the second article. The first author is concerned, among other things, with AI being used in Russian warfare against Ukraine. I oppose any warfare Russia is waging; the root of the problem is not in the tools. And, like it or not, progress can’t be stopped. Nobody likes nuclear weapons in the hands of dictators or terrorists. The antidote is having your own. This is unfortunate, but that’s how things work. Same with AI. Some jobs will unavoidably be eliminated, but new professions will emerge, quite unexpectedly. Luddites can’t win.

The first article voices legitimate concerns and the fear of letting AI take over, taking away the human aspects of our day-to-day lives and interactions. The second article is almost saying: you do not have a choice, so learn to live with it. I believe the way forward should be in the middle of those two. I do not think we can ignore AI or avoid it, but we can control how it evolves and how big a part we want it to have in our personal, and possibly even our professional, lives. We should do some serious weighing of risk and reward and not mindlessly rely on AI. I look at it as a crutch, a helper, an assistant to do the busy work so I can be the creator. Our environment has already changed and will continue to change; we have to find a way to thrive in it and influence its development so it gets better, not worse.

Gosh I don’t know where I land on this. I think this year I’ll try to more actively explore AI tools and see what I think at the end of it.

The discussion question got me thinking about an episode of The Outer Limits from 1997 titled “Stream of Consciousness.” The premise was that humans developed implants that could give them instant access to information, but not the knowledge behind it. One man, who was seen as lesser because he could not get an implant, was slower than everyone else. The ‘system’ is eventually shut down, and the man who was slower than everyone else, because he actually had to read and learn things, ends up becoming the one to teach people how to actually learn again.

AI can help us think more concisely and clearly, but it can also help us forget how to think critically if we just accept outputs at face value. Like the episode of The Outer Limits referenced above, we still need to actually learn how to read, write, and research, as well as debate, analyze, and critique.

I think AI can be a useful tool, and as such we need to learn to use it as a tool; it is not the ‘end all, be all.’ I am more interested in what humans have to communicate through art, books, and conversations. One way to make sure AI is used as a tool is to always (in all media) indicate when AI is being used.

I had a thoughtful comment, and then WordPress logged me out. So, my original was lost.

In short, AI is a tool in the toolbox. However, many companies view AI as a solution to avoid paying humans for “unnecessary” tasks, like writing, singing, and creating art. It should be used to enhance the hard work, not replace it.

The second article doesn’t account for the fact that we are in an age of automation bias, something I don’t think the other technological revolutions necessarily encountered. Automation bias, left unchecked, will lead many to cede creativity, nuance, and critical thought because the machine is right, right?

However, one area I think will explode is philosophy. What is creativity, nuance, and critical thought in the age of AI? And I don’t think AI will be able to enter the conversation without the humans thinking of these issues first. How else will AI be trained on this debate without the humans writing the papers, discussing it in forums, etc.?

For me, this comment by Royle is unnerving: ‘…eventually nobody will need to know anything.’ But in the article’s point that “Inclusivity in Generative AI Should Be an Attribute,” I see where the advancement and technology could be a useful and helpful tool, especially for neurodiverse persons and those with anxiety and other issues. Persons with disabilities have much to offer, and GenAI could possibly be helpful in passing on their ideas and thoughts.

Reading, for me, is a time of luxury, whether by yourself or with a child. It is a time you share special moments and teach sounds such as laughter, sadness, and excitement, and it builds intimacy through touch and other human emotions. Some of the best memories for me are reading to, or being read to by, my children. Reading helps to create ideas and imagination!

I think both articles made interesting points. A big issue with the rapid increase of GenAI, from my perspective, is the feeling of loss of agency. We desire control over words, ideas, and images, and it can feel out of our control when people turn to GenAI to do a job that people can do (just not as efficiently). While automation and new technologies are helpful for productivity, value isn’t inherently connected to how efficiently things can be put out. I’m reminded of a time a previous colleague repeatedly tried to have GenAI produce a graphic, only for it to keep resulting in errors and typos. While the tool quickly put out a product, it wasn’t high value.

Additionally, I keep coming back to the Nuremberg funnel concept. I think it is important for us to learn and struggle and experiment. One of the most exciting parts of my job is the fact that I get to learn how to do new things all the time. We stand to lose that, at least to some extent, if we don’t critically examine adapting to AI. I think it can have a time and place, but we need to take special care as we navigate how we want to utilize, embrace, and adapt to GenAI.

I was struck by the last sentence of the second article, “Either you work for an AI or AI works for you.” AI is here and is probably here to stay. We can avoid it, or we can embrace it, or we can be somewhere in the middle. We can try to shape it, or it will likely shape us. AI will have as much impact on society as did the printing press and likely will change society in ways we can’t fathom. Knowledge is power. AI can help us harness knowledge. I think we need to be well-informed. It’s a double-edged sword.

Ok, first of all, the blanket statement from the first article that “I also think that people are individually better off if they don’t use them” is wild. This is factually incorrect. Do you know how to interpret medical codes on an itemized hospital bill? Because I don’t. They don’t teach that in high school or in my marine science courses. But with ChatGPT, I was able to copy and paste these line items and ask what each one was for. Instantly I had my answer, but I wasn’t satisfied because I hadn’t received the service it listed. So I told it that, and it said that sometimes hospitals can use these codes for medicine, but it also gave me a script and specific things to ask if I want to call or email the hospital to clarify and/or dispute the charges. I work two jobs; I don’t have time to dig around on Google for this kind of stuff (now Google AI puts stuff at the top anyway, most of which hasn’t been what I wanted because Google searches are vague).

The concept of us being destroyed by AI is also an extreme take. Haven’t these people seen Black Mirror? It all depends on how you use the tech.

Personally, I’m a big fan of the 4 day work week (the history of the 5 day 40 hour week is very interesting if you haven’t looked into that), so I think the 2 day work week that they claim will happen is asinine. But also – so what? More time to enjoy and live your life rather than squander it away at a job just to have money to survive on? Doesn’t sound like the worst possible outcome to me. How are we going to lose our human connection if we have three more days a week to spend with our friends and family?

For the second article, I’ve made the same kind of comparison with the microwave. However, it is a tool of convenience – one many of us have no choice but to use when we have a short break to heat up food before eating it in front of our computers while getting more work done. Spell check? Same way. Until we as a society (or university) start prioritizing living over productivity, these inventions of necessity will continue to spring up. Cars? E-bikes? Same concept. Unless we move toward people over productivity *in companies*, don’t hold your breath on tech slowing down. It has to be supported by those with actual power, though.

Both of these articles feel more speculative and hyperbolic than grounded in what is actually playing out currently. Some of that is the nature of the world in 2025, but it doesn’t help anyone have the conversation about how we navigate using AI and the ways in which technology changes our understanding of the world.

When I was in graduate school, I remember reading about how dramatically trains altered the perceptions of reality for those living through the first years of their rollout. But within a very short period of time, people adapted, society changed, and most folks moved on with their lives, no longer thinking about how their lives had changed or the tradeoffs inherent in that spreading technology. That really resonated with me, as it reminded me of how much had changed with the spread of the internet (which we first got at my house when I was a freshman in high school) and how quickly life adapted to new realities.

I hope we are able to find space to have these conversations about how generative AI is changing what we know, what we expect, how we frame questions and look for solutions, and how we center humanity amid all of that, before we just move on to the next phase of life. Personally, if we can reduce the errors, I would appreciate using generative AI to help synthesize information and reduce some of the busy work so people can devote more energy to things that really benefit from humanity: things that are creative, involve ethical decisions, and solve problems.

We’ve all had (sometimes funny, sometimes embarrassing) mishaps with autocorrect/complete. I feel fortunate to have had a fairly developed brain and a robust set of experiences as new technologies – e.g., spell check, search engines, a modern OS, the world wide web, texting, social media, tablets, and now GenAI – became ubiquitous. So I have gained enormously. I simply cannot imagine going back to doing research without Google Scholar, sending e-mails with file attachments instead of using a cloud service, programming without git, writing without spell check or autocomplete, communicating without text/video/file attachments, and now taking notes and typesetting without GenAI. GenAI possibly further improves on all of those productivity gains. I’m not sure navigating the ethical and professional tensions regarding GenAI is much different than for any other technology. There have always been ways to cheat, lie, steal, and violate the trust of others. Some people make bad choices and pay the consequences. I think most people know what is generally right and wrong; it’s just a matter of dedicating ourselves to what is right (if that’s the kind of person you want to be).

Wow! Two very different perspectives. The first article was a downer, but made some good points. Lost: trust, jobs, real human emotion, originality. On the other hand, the second article reminds us that we have lived through many advances in technology and have gained productivity and time to do more meaningful things. We have to look at the pros & cons so we can come to an understanding of how best to use AI and how to use it ethically.

This reminds me of the controversy around personal computers in the 1980s. Today, like back then, there are varying opinions about the degree to which it will affect the future of human creativity and independent thought. Some people think it’s going to be the end of this or that, others see it as a revolution, and some of us are just on the fence, observing what we can to make an informed decision down the line. Regardless of the future, it’s going to require human input and review at many different points. Part of the future of AI will depend on independent fact checking as well as public demand. As people express disinterest in things like AI voices telling a story, producers will be less likely to see it as a viable option to make money. There are other potential capitalist consumer factors that will impact the future shaping of AI use, but that’s one that was particularly of interest to me in the article, so I used that example. None of it is the end of all independent thinking: responsible use and performance evaluation of output will always be an important factor for positive progress in AI going forward.

The second article gives the feeling that likely prompted the creation and allure of AI: that productivity matters above all else. There needs to be some balance between the stances offered in these articles. If all we care about is productivity, then what are we (as instructors) telling our students, especially regarding academic integrity? Sure, it is more productive to use a tool to generate information, but ethical engagement with the tool takes time and attention.

I might be a “decel” – I’m really okay with things slowing down. I resonated most with the perspectives in the first article. I was on vacation last week, and my sister-in-law used ChatGPT to turn a photo of my niblings (a gender-neutral term for nephews and nieces) into an illustration – it was really cute and captured elements of their personalities. Then her brother got really mad at her for using ChatGPT, because he sees it as a glorified plagiarizer that steals the intellectual and artistic achievements of humanity and repackages them blandly and for free, hiding the environmental cost. But my sister-in-law was never going to hire an illustrator to make this image, and the outcome was really valuable to her. So I think I’m in the middle of these two perspectives and am adjusting to the reality that GenAI is likely to stick around, for better and worse.

AI is a tool, and creativity can be enabled, not replaced, by tools. Productivity (perhaps not synonymous with value creation) will lead to widespread adoption of AI, but I am still optimistic that human creativity will discover ways to leverage the tools rather than avoid them.

I deeply appreciate the ethical questions around creativity, authority, and expertise in the implementation of GenAI. It is so important that we have these conversations now. However, I think fear is seldom the best guide: it won’t help us have a better conversation about GenAI, and it will not lead to outcomes that are desired or effective; fear is not the most appropriate response to a tool. AI is not human. We need to remember that, and hold onto the idea that part of that knowing, of demarcating the difference between human and AI, is learning how to apply and use the tool effectively.

Viewing AI as a tool and not as a means of replacing humanity is important here. The two articles present vastly different perspectives as to how AI is shaping our experience. Even though I do not use AI on a regular basis, it is a handy tool to have around when facing certain situations. It is so new that we haven’t figured out exactly how to grapple with it. I feel like similar feelings were evoked when social media was new, and we are still figuring out its harmful and good effects on society and interpersonal relationships. AI is here, so we may as well learn to use it responsibly.

Two quotes jumped out at me from the Guardian article that really underscore a concern of mine: that using generative AI in creative endeavors is really just kickstarting a process of producing more bland, generic crap. One noted that AI will only produce “more of the same,” while Emily Bender pointed out, “I’m not interested in reading something that nobody wrote.”

Both of these are so crucial. A world of AI-derived outputs sounds so generic and so gratingly boring.

AI is not the only driver of this, but Seth Godin’s blog really overplayed its hand when he said, “Productivity wins out.” What an unbelievably crass statement, but at least it pulls the curtain back from the wizard. As with so much in our economy, quality doesn’t *really* matter; what matters is an abstracted notion of productivity. Things can be kind of crappy, work can be repetitive and meaningless, and art is only valued as a commodity. The value of everything and the worth of nothing. AI is a product of techno-capital whose primary duty is to generate returns, and it uses “productivity” to obscure the process of driving up profits independent of the quality of the product. Think about the assembly line of Marvel movies that finally flew too close to the sun and generated audience backlash by cravenly selling the same financialized, tested-to-hell product again and again. I think this is the fate of AI and creativity. There will be some people who can channel it for exciting ends, but most of the outputs will be bland and so, so boring.

April Doty expressed my thoughts on AI: “But, more and more, we’re surrounded by it, and there’s no off switch.” In the apps and basic work software used every day, AI is pushed on the user. Open an application, and you immediately notice popups offering to use AI to do x, y, z. I have no fear of AI; it’s useful for many situations and kinds of work. What bothers me is the overuse of it. The industries and companies that seek to replace human creativity and culture with AI so they may cut down on expenses and increase their wealth, I find unethical and frustrating.

Seth’s blog closes with, “Either you work for an AI or AI works for you.” I am 100% in the ‘AI works for you’ camp.

Seth is right—either you work for AI or AI works for you. It’s here, so now the question is how to use it. One of my biggest concerns is the way it makes things up, and its tendency to give users what they want. But then again, we’ve had humans making up stuff for years, too.

I think the Guardian article quite accurately reflects my POV. I am all for smart appliances and productivity tools, and I am happy to use AI to write emails or reports that don’t matter (arguably, we shouldn’t have had those in the first place). But my nightmare scenario is that the majority of the population will lose access to training on how to think independently, how to analyze information independently, and how to create things, and will be left at the mercy of a small elite who will be trained and will have all the controls.

I love to read. It is my happy place–my great escape. I have seen some of the types of “books” that AI generates, and they are boring and predictable. I don’t think there will be a time when real authors, narrators, illustrators, etc. are not in demand. I do worry that we will begin to depend too much on AI, but at the same time, as the second article pointed out, many, many things have changed over the years, and have ultimately been beneficial. It will all depend on how the AI is handled, used, and regulated. I find it concerning, but at the same time realize that it isn’t going away, and I need to be able to adapt and have it work for me, while not allowing it to cause me to become lax in my own thinking, writing and creativity.

AI can expand possibilities, break through writer’s block, and democratize tools once limited to specialists. However, what’s lost is a degree of originality, personal voice, and the assurance that work fully reflects human experience rather than algorithmic patterns. Still, I doubt AI will ever be able to perfectly mimic the way we write and communicate, and original (or even silly or bizarre) ideas can still only come from humans.
