Justin:
[0:00] Disney and Universal suing Midjourney. That's pretty recent. There hasn't been more on it yet.

Jay:
[0:05] Here he is.

Kay:
[0:06] That's wild. Hello.

Justin:
[0:07] The boy's butt.

Jay:
[0:08] The boy. Arthur, be polite. We have guests. No, don't eat cables. You have food down there.

Kay:
[0:17] But cables are yummy.

Justin:
[0:18] It's a spicy, spicy Twizzlers.

Jay:
[0:21] You can't eat cables. They're bad for you, Arthur.

Justin:
[0:25] What is the news on the New York Times copyright case? Right, dude. Cause that was like a couple

Kay:
[0:31] Of years ago. And that was about news information specifically. But I feel like there was some kind of claim, like that there was some judgment.

Justin:
[0:39] Oh, did it get put together with the Anthropic one? No. I see something from May that said it was allowed to go forward, but I'm not sure if the Anthropic settlement... like, actually, I guess they let it go forward. I'm just hitting paywall after paywall.

Kay:
[0:55] Yeah.

Justin:
[0:55] Because the judge rejected it September 8th.

Jay:
[0:58] Welcome to the Library Punk segment of looking up case law.

Justin:
[1:03] Yeah. Or looking up news.

Sadie:
[1:05] Listen, as we all sit in silence.

Jay:
[1:07] Yeah for

Kay:
[1:08] Somebody who cares about AI, I'm like, I don't follow these lawsuits. Something's gonna happen, I guess.

Justin:
[1:14] I get stuff through Google Alerts, that's mostly how I keep up, yeah. But I'm trying to follow specific things, like AI use in academic journals, stuff like that, stuff that's niche. And then every once in a while a term will get used and it'll ruin the Google Alert. Like, I was using "AI and libraries," and then the injection into code libraries was happening, where people were making fake code libraries, so that ruined that Google Alert.

Jay:
[1:44] Arthur keeps gazing out the window like a fucking whaler's widow or something. He'll look out the window like, when will my husband return from

Justin:
[1:56] The sea. I made him this cable-knit sweater.

Jay:
[1:58] I've still not seen any of the houses around Massachusetts that have widow's watches on them, but I know they exist there. It's a specific feature of New England houses where there's a place, almost like a plank, I think, or an area where a wife could go out, and it was on top, or at least high enough that they could see the coast, because it's in coastal towns. And it was for whalers' wives and other sailors' wives and stuff. So yeah, it's called the widow's watch.

Justin:
[2:30] Wives and boyfriends.

Jay:
[2:32] Yeah. But I haven't seen any in Boston. I probably have to go down to New Bedford for that or something.

Justin:
[2:38] Yeah. Okay. Well, then I don't really have a segment. We'll just jump straight into the article.

Jay:
[2:43] What if I just start talking about Moby Dick?

Justin:
[2:45] Yeah. All right. I'm Justin. I'm some kind of academic librarian, and my pronouns are he and they.

Sadie:
[3:20] I'm Sadie. I work IT at a public library, and my pronouns are they, them.

Jay:
[3:24] I'm Jay. I'm a cataloging librarian, and my pronouns are he, him.

Justin:
[3:28] And we have a guest. Would you like to introduce yourself?

Kay:
[3:31] Hi, I am Kay. I'm a public library worker based in the Chicagoland area. My pronouns are any of them.

Jay:
[3:37] Hell yeah. Let's fucking go.

Sadie:
[3:41] Bitches do love anime. Get that bitch in anime.

Jay:
[3:45] I was so confused. I didn't see Sadie's mouth moving.

Justin:
[3:49] I found that drop from like, I don't know. I had to redo my soundboard, right? Because I got a new computer. And yeah, I don't know when we got that. But there it is.

Sadie:
[4:00] I don't remember saying that at all. But it definitely sounds like something I would say.

Justin:
[4:05] Almost exactly like something you would say, and have said.

Sadie:
[4:09] I'm trying to get a picture of my dog pouting just beyond my desk while I record because I trapped her in the room with me. It's gone in the Discord.

Kay:
[4:18] Beautiful.

Justin:
[4:18] So welcome back, Kay.

Kay:
[4:19] Thanks for having me.

Justin:
[4:21] Third time guest, technically. You might have got picked up on the live show.

Kay:
[4:25] I think so. Yeah.

Justin:
[4:26] Because we handed you a microphone, didn't we, at some point? Yeah.

Kay:
[4:29] I saw the transcript, but I just don't... like, I remember a little part of it, but I did, yes.

Justin:
[4:35] It sounded great.

Kay:
[4:36] Oh, thank you.

Justin:
[4:37] Yeah, I'm still surprised it came out as good as it did. But that's having a good microphone for you. USB mics are tough.

Kay:
[4:45] I got really confused.

Justin:
[4:48] The whole episode came out good.

Kay:
[4:50] Yeah, the room is good.

Jay:
[4:51] It's like, no, we're surprised you in particular.

Kay:
[4:53] Like, hey, no.

Justin:
[4:56] So you've been working on a lot of things. We met up at ALA and you were showing me some of the stuff you were working on. But you've been working on this paper in Library Trends, "Against AI: Critical Refusal in the Library," which will be linked in the show notes. So my first question is, how did you start writing the paper, and why choose to talk about AI?

Kay:
[5:19] Yeah, and I will clarify for listeners too: I'm recently out of library school, I finished in December of last year. So I did the ALA Emerging Leaders Program, yeah, that was pretty good. We did a poster at ALA, and we talked about volunteerism and Core specifically.

Jay:
[5:36] Don't worry, I did it too back in like 2018.

Kay:
[5:38] Yeah, it was an experience. I hope they do it again. Yeah, I met a lot of great folks through that program. So I did a lot of post-grad professional development work, did that program, and then I rolled right into doing the junior fellows program at the Library of Congress. It was very fun. Everybody was really chill there. So I've been doing a lot of stuff in the last two years that has been engaged in projects. But before I was even into libraries, I did a master's in communication at the University of Illinois, Chicago.

Kay:
[6:13] My track before libraries was really going to be communication studies, media studies. I did my master's thesis on deepfakes, which provided me with a lot of context for this work; I mean, just the background understanding of what AI is and what's going on in computer science. At the time, this was probably 2019 to 2021.

Kay:
[6:33] So COVID happened. I applied to some PhD programs, didn't get into the ones that had funding. And I was like, I don't know what I'm going to do. But I obviously like doing scholarship.

Kay:
[6:44] So I took a little bit of a break from doing that, started doing library work. And then once I got back into school, I learned about how there's a whole discipline of information studies, and a history, that I didn't know about before. Okay, maybe I'm a little more aligned here, in terms of scholarship and just my interests. Because with deepfakes, the study of that in communication was a lot more about, how are people experiencing deepfakes, what are the broader implications for politics, understanding speech and language. And that's fine, I think that's important work. At the time it was definitely a

Kay:
[7:21] Sexy topic to talk about. But at a certain point I kind of hit a wall, just in terms of, all of the solutions to these problems were regulation, and things were going pretty slow in terms of laws, in terms of people understanding certain things. Illinois has been pretty on the cutting edge of a lot of stuff; Illinois has passed a couple of deepfake-related laws since this all kind of began a couple years ago. And I'll provide context for that, too: a lot of this was before deepfakes were part of the mainstream. This was when they had that Tom Cruise deepfake, or the Jordan Peele Obama impersonation one. So it was a pretty good experience, but I kind of got scared away from professorship. I was like, I don't know if it's a path for me, I don't know if I really wanted to teach. So, I just followed a bunch of people doing information studies work on social media, and I saw the special issue call for Library Trends on the subject of AI.

Kay:
[8:14] And I was like, I have a lot of knowledge that could be useful here. And I've noticed that in professional development trainings, or just people talking about AI, a lot of the stuff coming out of organizations was very positive, maybe even neutral, about AI. And I was very confused about that, right? Just because that was not any of my experience studying that work before, especially in communication. Everybody was very aware of the harms, the impacts, the violence of it all. So a lot of it was just me approaching it like, are you guys being for real right now? Is this really what we're being positive about? I was very confused. So I think the paper goes into a lot of touch points about why I think AI is harmful, just kind of on the surface, which we'll talk about probably later on. But I'm just trying to coalesce all of the discourses, to use a fancy word, that I've seen about AI, the ones seen professionally, like in, you know, ALA or other sorts of places, as well as just online on social media, among the people being critical. So, yeah, that's kind of where I came into it. It was sort of a, I thought I was out, but they pulled me back in, situation with AI. It kind of chose me, I guess. Yeah.

Jay:
[9:26] I'm curious if, especially relating back to your previous work on deepfakes and how that relates to specific types of AI, I was wondering if you could maybe talk about the distinctions of the kinds of AI and what gets labeled as AI and how that affects this discourse.

Kay:
[9:47] Yeah, definitely. Something that I learned through doing that research was there were particular applications that were being used in like hobbyist or niche internet communities that were essentially just like importing different videos into a software application.

Kay:
[10:03] Those things being mashed up in a particular way, where that way is constructed through algorithms, to put out some kind of output. That's when the videos would kind of look a little wonky, where you could tell there's a lot of differentiation between, like, a face and someone's body; it just didn't look as seamless. But then there's also this other camp of things happening, where there were more people who were doing specific training of generative adversarial networks, on a much larger scale than just a smaller application. So I think a lot of these things tend to be machine learning. They tend to be really just automated systems, versus things being trained in a network and then things being imported into that network to then create a different output. And I'll also really add to that: training is not automatic through just code. It's the labor of people creating those things and training those systems. So it depends on the scale, definitely, of where things are happening. But a lot of the applications that are commercially available are definitely operating on a massive scale that involves an immense amount of outsourcing of labor to people in the Global South, etc. So yeah, I mean, a lot of audio transcription is really just processing frequencies, depending on how much of that is really generative.

Kay:
[11:18] It just depends on the application. But when the material is used to train, to create other outputs, versus just adding effects, those are two different situations. But it's really complicated, and you have to know a lot about how computers work to understand what the differences really are, right?

Jay:
[11:36] Like, I know there's a lot of OCR and transcription software now that's like, oh, we're fancy AI now, but really it's just pattern-matching algorithms, and you have to train it on specific types of things. That's different than, I want you to output this brand-new thing based on this library of data that consumed a lake somewhere.

Kay:
[11:59] Yeah, definitely. It's a lot of the difference between the computer understanding the edges of shapes and color, versus something like keywords or tags being attached to something based on a certain input. Those two things together create a different output; the latter is more generative AI.

Jay:
[12:17] Yeah, I think those distinctions are important for library workers to know, as different types of tools use AI as a marketing term. Like, what is this thing that's actually being marketed at me?

Kay:
[12:29] Yeah, so much of it is... it's a sexy term to use. Some companies are using it to pitch new services to people. I mean, this is happening, not really in my immediate experience as a public library worker, but in the collective experience of library workers: vendors attaching AI to things that are really just doing audio transcription, or OCR. So it requires us to look deeply at what these contracts are and say, okay, what is this really even doing? Emily Bender and Alex Hanna's book, The AI Con, is really, really great for thinking about approaches to all of these things. They're also really nice people. But they are really into breaking down all of these technologies in a way that does the whole, you know, this is all math, sure, but what are the implications of this being math? What does it mean for it to be algorithmic, et cetera? So I really recommend that book for people. It's a really nice read. It goes through different spheres of work, too: it talks about healthcare, it talks about journalism, it talks about business, I think even marketing, too. Yeah. So I recommend that if you're like, this is kind of a lot of technical information and I need some kind of friendlier read that is from experts.

Justin:
[13:43] It's also harder to keep up with what the technology stack is with some of these as they become products, because there's layers of software. Like GPT-5, I think it uses an LLM to choose which LLM to use. There's layers and layers of recursive compute. Like, I think some of the audio editing stuff that I've used will do just voice recognition, but then run it through like a GPT, and then go back and change the audio to match the words it thought it heard. I was also on another podcast, and the guy basically has all of it automated, and random fragments of sentences were just popping up from a third speaker who wasn't there. It was just creating audio fragments based on things that it thought it heard. Very wild. And he didn't remove them, which is strange.

Justin:
[14:42] But yeah, even for me, like, I try and keep current on this, but the amount of layers that gets thrown in... If you use Copilot and just type something in, it's trying to obscure what it's doing. It'll show you, oh, it's thinking, but it's actually like, it's going to use this technology to do this task, and it's going to use that technology to do that task. There's actually four or five different things going on, and it's just calling all of that AI. So it's even more difficult. Like, you talk about GANs a lot, which I feel like people don't talk about enough. I was just saying the other day, it's strange how people don't talk about GANs in copyright, because the way I've had GANs explained to me for image generation is, basically you just keep statistically guessing at what the training image is until you've statistically guessed what it is. So you've basically done copyright infringement by algorithmic cheese grater.
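
[Note: the "statistical guessing" Justin describes is the adversarial loop of a GAN: a generator guesses samples from random noise, a discriminator grades those guesses against real training data, and the generator keeps adjusting until its guesses pass. Below is a minimal toy sketch of that loop in PyTorch, on 1-D numbers rather than images; the network sizes and data are invented for illustration, not taken from any system discussed here.]

```python
# Toy GAN: the generator never sees the training data directly.
# It only gets the discriminator's verdict, and keeps adjusting its guesses.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # The "training data": samples from a Gaussian centered at 4.0.
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    # 1) Discriminator learns to separate real samples from the generator's guesses.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator adjusts until its guesses are scored as "real".
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("real mean ~4.0; generated mean:", G(torch.randn(1000, 8)).mean().item())
```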

Kay:
[15:37] It's like the monkeys on

Sadie:
[15:39] Keyboards. You hit enough keys, enough output, you're gonna recreate something, right?

Kay:
[15:44] Something that I noticed when I was doing my master's thesis, I was focusing specifically on deepfake porn, and I was looking at a site called Mr. Deepfakes, which now I believe is defunct. I think the person who ran it got caught or something. But basically, I was looking at the performance of race and gender on this website, and the frequency with which, at the time it was mostly, like, Elizabeth Olsen, Emma Watson, a lot of these white actresses, being transposed onto the bodies of Asian sex workers, or vice versa.

Kay:
[16:16] And the differences in those things being really based on skin color, because the GAN only understands pixels and color. There was no possible way, at the time at least, for there to be any kind of contextual understanding. And I definitely talked about the violence of, you know, the sex workers not being compensated for that, the exploitation, them never knowing that those videos were being made using their bodies. And I was tracking, how popular are these videos? What are people saying on these videos? And a lot of it was people in India; a lot of the videos were of Bollywood actresses, which I didn't expect going in, because all the popular videos were all white actresses. And it was just a particular experience that I think was helpful in me understanding what the limits of the technology are, at least at the time. I don't know exactly where deepfake technology stands at the moment, but at least I know that the people who were making those videos obviously didn't care about exploiting anybody, which is horrible. But yeah, it was just kind of a lot to watch all the time, too.

Kay:
[17:23] Like, okay, because, I don't know, Elizabeth Olsen was doing some Marvel thing. That was when WandaVision was out, I think. So it was like, oh, hey, I'm seeing this woman all the time. Yeah.

Jay:
[17:34] So. I remember seeing a lot of the ones with the actress from The Office. Her face was used a lot.

Kay:
[17:41] She was, and AOC was also on it a lot, which, I don't know. I had a lot of advisement from the people I was working with in my department, being like, well, think about the political ramifications of somebody like AOC being depicted. And I was like, I don't know, she has the capital. These women get no compensation for what's being displayed, or how they're being displayed, rather, in these videos. So I kind of felt a little jaded after the whole experience, when what I realized at the end was just, regulation, or people having to self-identify the video as being deepfaked, or some kind of encoding thing happening. I was just like, okay, I don't know. But yeah, so that's my little deepfake story.

Justin:
[18:29] Yeah. So in your piece, you talk about an article that you were writing before this one, which was about, quote-unquote, AI literacy. I know it's in the article, but can you tell us the story of that editorial journey you went on?

Kay:
[18:47] Yeah. So that was my colleague and I, a coworker collaborator, Claire Ong. Her and I have been doing a lot of programming about AI at our library, and doing public scholarship to some degree, trying to get library workers interested in critical AI. Her and I decided to pitch something to a professional association magazine, and we wrote really about impacts, harms, etc., thinking about kind of what I ended up writing about in Library Trends, but trying to elucidate for, you know, the public of this association: here are people in computer science talking about why AI is harmful, here are the resources that you might want to think about, direct to the source, essentially, and synthesizing those things down. And we pitched it as, I think the title was something to do with taking critical AI seriously, something to do with really naming that as a thing. And then we got the editorial feedback back. And thankfully, none of the writing got adjusted, but they were like, oh, we're going to call this AI literacy, something inspiring, engaging, empowering. And I was like, I don't know why we called it this. I was really confused. I emailed the editor and I was like, this... AI literacy is not a thing.

Kay:
[20:07] A professor of mine at UIC, I think she's now at Michigan, Kishonna Gray, is writing this piece about synthetic literacy. She had tweeted about it, but she hadn't published anything about it yet. And so I tried to sort of reconstruct this for the editor, and she was like, I don't know about all that, basically, and just didn't take my change. But yeah, it was very interesting to see the willingness to call this something that it wasn't, at least in the way that we had made it legible. Literacy, like we talk about in the article, has to do with comprehension: understanding, reading something, synthesizing it in your own sort of self, or with others. And that has particular meanings, and you understand what those meanings are through ingesting it and observing it.

Justin:
[20:54] And it's different from... Yeah, you bring up Stuart Hall.

Kay:
[20:55] We love Stuart Hall. Yeah, the encoding/decoding model, definitely. Just thinking about the difference between that kind of understanding of literacy versus just reading code or understanding what computer systems are doing. And there was a lot of conflation of the two, and there still is, I think. But we were talking just about social impacts and environmental impacts, what's going on and what's been reported and studied, and talking about Timnit Gebru getting fired, her blowing the whistle on Google and stuff like that. And it just felt really separated from what we had thought it was. But they didn't really adjust any of the writing. So we were like, okay, if they're not going to take the title change, at least people reading it will still get what we put out there.

Justin:
[21:40] Is there anything that we could call AI literacy or would we prefer to call it algorithmic literacy or AI comprehension or something else?

Kay:
[21:49] I think it was you, Justin, who tweeted AI comprehension a while ago or something, and I was like, that's what that is. Because it's just understanding what AI does; it's just a functional thing. So I think AI comprehension... Algorithmic literacy feels to me a little more like understanding how code is working. But it just depends on the context, I think. Because AI itself has become this sort of packaged thing that is separate from the code, and most people are really far removed from the back end of things, it's difficult to say that it's algorithmic literacy, at least to me. But I'm really looking forward to Dr. Gray's article, or chapter, coming out. I don't really know much about it, but she says it's synthetic literacy. So to me that feels like an understanding of, like when I talked about deepfake videos, understanding what is happening in that situation: understanding that there is a face put onto a different body, that the speech is altered, and viewing that kind of alteration, understanding it as such.

Justin:
[22:53] Yeah, I know I've complained about AI literacy as a term, particularly because of what I hear at work. I probably just read something recently that was like, employers want employees who are AI literate. To me, that's like, well, you could have that without ever using an AI. Everything I understand about AI, I didn't learn from using it. I learned it from reading about how it works. I learned it from people talking about how it's broken. You know, I wasn't sitting there playing 20 questions with it. I don't know if I brought this up: I was in a professional development thing for our faculty day recently, and one of the faculty members was presenting with one of the instructional designers, who I'd talked to before, so I knew she had a grasp on how GPTs work. And he was like, yeah, you know, and if you tell ChatGPT to keep things confidential in the session, it will keep it confidential. And me and her just shot a look at each other, because he wasn't talking about the thing in ChatGPT where you used to be able to turn off the learning thing, where it would learn from interacting with you. He said, ChatGPT, I'm going to paste my book in here now, don't copy it. And it would say, okay, and he believed it. And this is a man with a PhD who teaches college students and was giving professional development to other faculty members, and he doesn't understand something basic, which is: this is a lying machine.

Justin:
[24:15] And so, you know, on the one hand, I understand why the term AI literacy is important because it's like this man is illiterate in a way, but it's also like he learned about it from using it and that's not what he should have done.

Kay:
[24:26] Yeah, that's a very interesting way to situate that, because in his mind, he is becoming literate in the sense that he's learning, and experiencing learning, through that tool. That's just simply learning; I don't think that's literacy. But when I've heard AI literacy, especially in the last year or so in library professional development stuff, people are really keen on understanding what the tools are, what they can do for patrons and other staff members, and that's the literacy. But then the idea of quote-unquote ethics and impacts is not really a part of that literacy. I'm really feeling this frustration with this separate sort of understanding, this attempt to categorize literacy and ethics apart, as if the two, even if literacy was the thing they're talking about, those two things need to be together. Because I think instituting it as ethics, or an ethical dilemma, presupposes that there's a willingness to look at "both sides," quote unquote, or multiple sides, when these people aren't even accepting dissent or criticism, and feel very overwrought and get kind of defensive when you bring up a lot of the harms and impacts and stuff. So yeah, it's really weird how even people who have PhDs, or are professors, or people with some kind of authority, are falling for these tools. Yes, Sadie.

Sadie:
[25:55] Oh, I was waiting for you to finish, but okay. Well, a lot of this is on the IT side of things, too. I subscribe to a lot of different, particularly computer security, newsletters and stuff, and every other article in all of these newsletters is about AI. And it's about how tech workers need AI, or they predict that if you're proficient in AI, this and that. Which really frustrates me as an IT person, because it's like, shouldn't we know better? But then again, it's, you know, Microsoft and Google and all of these companies that offer free technical certifications and teaching and stuff that are also pushing all of this AI stuff. And it's like, if I can't turn Copilot off, I will be going into the registry to find that. So it's a widespread problem in the tech world, too. Which, in terms of literacy, or sort of information with integrity, which you bring up in your paper, and which is a really good way of putting it, in my opinion, there's none of that on the IT back side of things.

Sadie:
[27:11] There's no discussion of... yeah, the impacts and the harms. There's no discussion of, you know, what's behind it, unless it's to try to push it as a product. So yeah, it's certainly everywhere, which is really concerning.

Jay:
[27:25] Yeah, and this kind of reminds me, and I'm sure I've talked about this paper on the podcast before, I don't remember what it's called, I'm sorry. It was part of an assignment in my library school 102, and Dr. Knox was my teacher, so if she knows what I'm talking about,

Jay:
[27:43] Put it in the comments. But there's this paper that argues that librarians doing infolit, specifically in academic libraries, but I guess anywhere, right? If you're doing a library session, or any kind of information literacy session, part of that should be, when you're teaching a database or something, that you tell the students: this will track you, or this has these trackers, or, if your browser has these kinds of anti-tracking features or plugins, it will break the way this database works. So the librarian not only has to be literate about all of those things in these tools, it's part of the teaching; that's what the literacy is. It's not, oh, the students need to know how to use the database and let Elsevier track them or whatever. It's letting students know that this exists in this tool, and they can either choose to turn off all of their stuff and use the database, or have it break on them. But students are at least aware that that's happening, that their information is being gathered, right? And the librarian's being honest about that. And that's an ethical thing, right? Students are now aware that this is a thing and that they're being tracked. I don't know.

Jay:
[29:13] Part of this, of, oh, well, we have to teach students how to use AI, we have to teach patrons how to do it. I think what's more important is letting people know where this already exists and what it can do, and just letting them be aware of it. That's part of the literacy to me, I think.

Sadie:
[29:33] Well, you just reminded me of, I think it was somebody in our Discord, talking about an assignment for their library program where they had to sign up for something or other. And when they went afterwards to request that their account be deleted, or their information be wiped, it was a nightmare. And it wasn't something that they wanted to sign up for to begin with; they only did it because it was required for a specific assignment in library science. And then it took them a long time to actually be able to confirm that their data with this company was deleted. And it's like, yeah, that's exactly what it is. That's an illiterate approach to any sort of data privacy right there. That is something that librarians should be proficient in.

Kay:
[30:21] I just think if we're going to do information science, we should do the information science. We should look at the stuff, see what's going on in the computer. Like, which is it? Are we doing library science or information science? It just makes me feel like I'm in the twilight zone.

Justin:
[30:39] Yeah, I mean, it is kind of like you said: you had to discover the information science side of things, because a lot of people treat that as theoretical, or as stuff PhDs do, and librarianship is really pushed by practitioners. And something I wrote down earlier, when you talked about the need to embrace AI: there's a lot of ideology in libraries. That's probably a whole show. But there's also an insecurity, this constant insecurity that librarians will be left behind, and it's been going on for decades.

Jay:
[31:11] Or it's going to take my job.

Justin:
[31:13] I mean, this has been going on from the 90s, where it's like, we have to keep up, we have to be "cybrarians," that was the term in the 90s.

Sadie:
[31:22] We're going to become irrelevant, which is the thing I have heard so many times, it makes me want to bash my head against something.

Jay:
[31:28] Everyone go watch Desk Set.

Justin:
[31:30] And that's also the thing about, you know, the comparison of, you have to get on the AI bandwagon because it'll be like the internet. But the internet was implemented over decades, through a lot of infrastructure. It's completely different. This is like saying everyone needs to get online with Microsoft Word because Word is the future, and it's like, there's OpenOffice and stuff. It's one piece of software. It's not a new infrastructure. It's just

Jay:
[31:57] People trying to push you into Emacs. And you can do everything in Emacs, like check your email and

Justin:
[32:02] Tweet from Emacs.

Kay:
[32:04] When I hear from people who are library leaders talking about AI, like, we have to get with it, basically, or we're going to be left behind: you're talking to people who are in library school, or recently out of library school, facing a really competitive job market, who are really struggling to figure out, how do I get a full-time job with benefits in a place that I'd like to work? There's already enough to deal with, enough burnout, enough problems, and we're adding this on. When you frame it in that way, I think people tend to get defensive, like, well, we have to keep learning new things or whatever. And that's not what we're saying. We're just saying that we shouldn't use the racism machine. Like, I don't want to use that. Don't

Jay:
[32:53] Use it, yeah. Like, I think part of this, and I think I talked about this a little bit in our BIBFRAME Must Die episode: there's such a problem with training and professional development, and especially upskilling, among librarians. This is not a fault of the workers; this is a fault of management and library leaders, right? Because, yes, things in library science and tech do change, and you should keep on top. Like cataloging, you know, shit changes all the time, right? We're always coming up with new ways of describing things. There's lots of development in the field and things we have to keep on top of. That's true.

Jay:
[33:32] But there's such a problem, especially in tech services, for example, of people not retiring out of positions. Or, once you get in a position, there's no career path, right? There's no, okay, I'll stay in this position, and then eventually I'll get promoted to this position, and then this position. You're kind of just stuck in your position, and if you want something better, you have to leave, and people don't want to leave. And then those people don't upskill, because it's not provided to them. And so then you get all these hot, young, fresh library school grads who have all the new hotness and know everything, who are trained in RDA, and that's the only thing they're trained in, and they understand FRBR and WEMI and all this shit, and they're fresh, and they know these things, and then they're not getting hired, because the people who aren't upskilled aren't leaving those jobs, so those jobs aren't available. It's this whole cycle: yeah, we should keep on top of things, but the people who know it aren't getting hired, and the people who don't know it, their management isn't upskilling them and training them in order to keep them abreast of things, so that we don't have to use fucking AI, right? We can just be trained in other things and have skills. I think way more librarians of all ilks should have at least some sort of skill or literacy around basic coding, or just any kind of IT skills.

Jay:
[35:01] Because you'll be surprised how often it comes in handy. But because I'm the only person who knows anything about it in my department, suddenly I'm the person, right?

Kay:
[35:13] Oh, yeah.

Jay:
[35:14] Right? Like, what if more librarians took, like, a Python course, you know? Like, if that was provided in library school, or in training at all. That kind of upskilling, it's just not happening.

Kay:
[35:26] Yeah.

Jay:
[35:27] Rant over.

Sadie:
[35:27] I would say it would be better even to not stick it to a particular language, but just a programmatic thinking course, because there are a lot of parallels between library work and that sort of thinking, too. So, yeah, a Python course, but if you are just memorizing the syntax, it doesn't help with the critical "this is how it works, so therefore I can do this and that" thinking, which actually is a lot more transferable to other coding systems.
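
[Note: one small example of the transferable "this is how it works" thinking Sadie means, sketched in Python. The titles are invented and normalize is a made-up helper; the reasoning, normalize away the noise, then compare, carries over to any language or tool.]

```python
# Normalize messy title strings, then group them to spot likely duplicates.
titles = ["  Moby Dick", "moby dick", "Moby-Dick", "Desk Set", "desk  set "]

def normalize(title: str) -> str:
    # Lowercase, treat hyphens as spaces, collapse runs of whitespace.
    return " ".join(title.lower().replace("-", " ").split())

groups: dict[str, list[str]] = {}
for t in titles:
    groups.setdefault(normalize(t), []).append(t)

for variants in groups.values():
    if len(variants) > 1:
        print("possible duplicates:", variants)
```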

Kay:
[36:02] Totally. I think just knowing how a computer works, like, what is the hardware of this thing? What is RAM? Understanding these things, I think, will really go a long way. Honestly, even in public services. I mean, I've worked in public libraries for my entire library career thus far, and I do a lot of one-on-one tech help with patrons. And a lot of people I've worked with across the big city system, even in the affluent suburban library that I work in currently, a lot of the staff don't even know how to use the computer. And then I am having this current thing where there's not anywhere for me to move up in my current library, because everybody is established. So I'm on the job market, basically. Hire Kay.

Jay:
[36:49] You're an idiot if you don't.

Kay:
[36:50] If you're in Chicago, please hire me. But yeah, I just think it's so important for library workers to... like, I know coding is this intimidating thing, but understand at least, how do you access the terminal on your computer? Know what applications are, what file formats are. I love talking to patrons about file formats. It's good stuff. You don't have to know about GANs, I mean, I would love it if you did, but you don't have to go with all that if you don't want to.

Jay:
[37:17] I took a Library Juice Academy course, and the fucking file types were insane. I was like, pick

Kay:
[37:24] One. It was, like, insane.

Justin:
[37:29] Anyway, jumping back to the article: there are three areas of critique that you focus on, and I wanted to get at why these three. You say there's the reinforcement of algorithmic bias, so racism and hate speech; there's data collection practices and a prolific lack of concern for user privacy; and there's the environmental impact. Since this is mostly a persuasive sort of article, were those the top three that you felt were the most impactful for people? Did you do any research on what changes people's minds about AI, what makes them more skeptical?

Kay:
[38:09] That's a great question. I think this is coming from my own experiences of studying AI. And I will say I use these three as really broad categories. The technology has also changed quite a bit in the last two or three years, and reporting has sort of altered depending on the companies. But a lot of these things have stayed the same, in that there is no transparency at all from the tech companies about what these things are meant to do, what the algorithms are meant to accomplish, at all. So when I think about the reinforcement of racism and hate speech, the first thing that comes to my mind is facial recognition, and the enforcement of that: law enforcement, surveillance. But I tended to focus more on text and ChatGPT sorts of things in the article, just to have some kind of focus. There are so many possibilities for misrepresentation of people, history, context, lived experience, just willful misrepresentation, and in fact intentionally so. And I think I mentioned in the article, I don't remember if I did or not, but ChatGPT at least used to say that it would not produce hate speech, that we're not going to do this for you. And then it's so easy to break that, so easy to just do a couple of commands and unlock that from the program. But I think that was ChatGPT.

Kay:
[39:30] I don't know. So that is definitely one broad category. Also, I think in that part, I talk about Timnit Gebru sort of being like, hey, you guys don't care about Black women or anybody who isn't white, and Google being like, we don't care. Bye. So that was sort of the social discourse surrounding that as well.

Kay:
[39:50] Data collection. I mean, yes, web scraping is such a thing; at least at the time, OpenAI was very open about scraping the web. I mean, this was pre a lot of the lawsuits, a lot of the copyright issues that were going on, on which, in the article, I don't take a stand, because I am against private property as a concept. But to this point, I recommend Astra Taylor's book called The People's Platform. She has a good chapter on copyright that talks about a lot of these issues; if I were to go back and add a citation, I would do that there. It talks about the different arguments: copyright is a way to control people's likeness, intellectual property, et cetera, and it really creates barriers to access, but it also in some cases ensures that people get paid for their work, and that's complicated. So because of that, I'm not going to make a claim here, but somebody else can if they want to. In terms of user privacy, yeah, I mean, part of information literacy for me, and for library workers, is, like Jay was saying earlier, people understanding the systems at work behind the technology. The feds can very easily get access to information. Things that you put into ChatGPT, I believe law enforcement can access to a degree if there's a warrant. Same thing with Discord. Same thing with the Meta platforms, especially with Meta pushing it in the last week.

Kay:
[41:14] So, you know, be aware of those things, is sort of my take on that. And when it comes to environmental impact, I mean, there is a lot of reporting about the water usage and electrical usage of AI definitely affecting people's communities. Most particularly, what comes to mind is what's happening in Memphis right now, with xAI blowing methane gas out of these plants and poisoning everybody around the area. That's the sort of impact that hadn't happened yet when I wrote the paper. But I want to bring up this whole anecdote: I went to a webinar last week that was about this new library book, generative AI something-something, its use in the library, or something to that degree.

Kay:
[41:57] And I asked a question in the chat, because they didn't talk about ethics or impacts. And they said in the presentation, you know, we didn't talk about this because we didn't want to get into, essentially, the messiness of it. And I was like, okay, at least you're saying that, but we'd love to see more, obviously. But then one of the authors made this claim that being critical of AI was somehow reinforcing the traditions of librarianship, because it means that we don't move forward or innovate. And I was like, that's not what refusal means.

Kay:
[42:28] That's not right. And so I cited, you know, environmental impacts and the outsourcing of labor, exploitation, etc. And then the other author was like, well, wait till you hear about the environmental impacts of cultivating beef. And I was like, oh, is that really how we're going to approach this argument right now? So I think there's more to be said to deconstruct those kinds of arguments. But yeah, those are sort of my main three areas of critique in the article, which I hope to expand upon in the future. For future and current projects, I'm definitely looking at data centers and their impact on local communities. You know, in Chicago, we have data centers being built here that have raised our electrical costs 10%. No one consented to that at all. So besides the environmental impacts, there are the immediate utility costs: raising prices for residents, for businesses, schools, any place that uses electricity, which is everywhere. That's important to think about. And also thinking about, as I've been saying, the data workers themselves, who are actually training these systems. So I think of environmental in the sense of nature, as well as the labor environment, the environment of people in society.

Justin:
[43:37] If the AI booster says they got beef, tell them I'm a vegetarian and I ain't fucking scared of him.

Sadie:
[43:44] Can I please get that as a drop just so I can have it personally?

Kay:
[43:48] It has to be a drop. It has to be a drop. It's my new ringtone. I thought you were holding that in too.

Justin:
[43:56] Uh-huh. Yeah.

Jay:
[43:57] Oh, he had that one ready to go.

Justin:
[44:00] Every time. Just sitting there vibrating at frequencies that you can't see. What does a politics of refusal look like in practice? If we are refusing AI, what does that mean we are facilitating in the meantime? Because there's this thing where it's acceptable to affix tech solutions to social problems rather than to make space for social solutions. So if we refuse this tech solution, what are we trying to make space for in terms of social solutions? Or is that the wrong track?

Kay:
[44:36] I think the problems that people think AI is trying to solve are things like burnout and accessibility. Things where accommodations can be made in the workplace, where people can make the choice to change how they conduct themselves.

Kay:
[44:52] So I think agency is a big part of that. And obviously, it's important that we understand, too, that everyone has a different context and material condition in which they're working, and not everybody is going to get access to that vendor contract discussion. So I think if you feel confident enough at work to openly say that you don't want to use that particular technology, and that you value, you know, the human labor that you're getting paid to do, that's, I think, the first thing. I think Emily Bender and Alex Hanna in their book talk about the importance of understanding what the actual outputs of a technology are meant to be, and asking questions of the people who are trying to put AI in the workplace: what is this really meant to accomplish?

Kay:
[45:37] Are there ways that we can actually step in and say, you know, what if we changed our method of management or administration of a particular tool in the workplace? I think, at least as somebody who's doing scholarship, public scholarship is really important: making information readily available and accessible to people. So I was really glad this issue is open access, just so people can actually learn about this and share it with others. I think, you know, there is a lot of space for critique. And it is kind of a hard situation sometimes. I know sometimes I can feel, not afraid, but like I'm getting into a capital-S Situation when I am faced with somebody who is positive about AI and I have to say, hey, I don't agree with this. So I think it's about having agency and saying, hey, I don't like this. And that's okay. It's okay to not like it. Also: unionize, if you can.

Jay:
[46:30] Yeah, I was about to say. It's like, I'm about to grab my microphone so tenderly, like, listener, listener, I've got my arm around your shoulder. Hey, buddy, how you doing? How's your day? Have you unionized your workplace yet? Have you put a tech clause in your collective bargaining agreement yet? You can do this. You can refuse through unionizing. You can do it, I promise.

Kay:
[46:48] Yeah. And if, in your workplace, there is a situation where you may or may not get fired for trying to organize, if there's a risk there, I say, you know, talk to your peers. Be socially engaged. Say, hey, here are some resources, I'm just thinking this is kind of weird. Trying to have conversations with folks, I think, is really, really important, even if you can't necessarily get to a proper bargaining agreement. But try if you can. Yeah.

Jay:
[47:18] In North Carolina, they do meet-and-confer, because you can't have collective bargaining in public service in North Carolina and in a lot of the South. Meet-and-confer is something you can absolutely do, and it works. Also, if people are afraid of organizing: all organizing, no matter how big or small, is literally just about one-on-ones. That is the core of what organizing is: can you talk to another person? If you can't, learn. I'm tired of people going,

Sadie:
[47:46] I don't know how to talk to people.

Jay:
[47:47] Learn. You can, I promise.

Kay:
[47:48] We talk to people all the time. Like... I'm so sorry.

Sadie:
[47:54] Oh, my God. So bad at that.

Kay:
[47:57] Everyone has their own capacity, too. Sometimes I can get too air sign with it where I'm like, everyone's valid. But I'm trying to be like...

Jay:
[48:06] Or come in like, no, you're not.

Kay:
[48:07] And that's okay. So, you know, I have been in substance use circles, and the idea of meetings is just sort of, you and somebody else in the room. It's really the same kind of concept: just because there's only two of you doesn't mean there's no group involved, that you can't just have a discussion and talk. And try to find community online if there's nobody physically near you; there are plenty of people who are very open about being critical of this technology. Yeah. So I think there are definitely ways, at least in terms of collective organizing. I will say, too, there are a lot of data workers' organizations that are specific to resisting exploitation, especially the Tech Workers Coalition, as well as the Data Labelers Association, I think it is. Let me find the link. But those are people who are data workers, who are actually doing content moderation and annotation, and who are being affected by AI in a very real, physical, material way.

Jay:
[49:10] Are those the people in Kenya who unionized? Yeah, I remember when that happened. That was dope. And this has been Kay and Jay's Union Corner. Yes.

Sadie:
[49:20] There was another book that you recommended a couple of minutes ago. I think it was a book, but I didn't quite catch the title. It wasn't The AI Con. Maybe I'll just have to go and actually listen to the episode. No, I was just asking.

Kay:
[49:35] If it's not The AI Con, it's Data Cartels by Sarah Lamdan. That's probably what it is.

Jay:
[49:39] Shouts to Sarah Lamdan, friend of the pod. We know you're listening, Sarah. Hi.

Sadie:
[49:46] We hope you're listening, Sarah.

Kay:
[49:47] Thank you so much. Yeah.

Jay:
[49:49] You're so cool. Anyway.

Justin:
[49:51] Yeah. You mentioned access to information that has integrity; that's something I took a note of. I can't remember which section of the paper that was in, though. It was closer towards the end, I think. But it was one of the ways in which we can talk about the value of librarianship in response to AI, because the information that you get out of an AI is non-repeatable, and it's non-reversible. So you can ask it, who's Tom Cruise's mother? But if you type in Tom Cruise's mother's name, if you say, who is the son of Tom Cruise's mother, it might not give you the answer, because it's not a database. You can't do back-and-forth searching and re-retrieving of information, because it's not structured in any way. I'm curious how we talk about information integrity, because I feel like, in the current climate, it's a very difficult subject to get people to care about, because there's this sort of nihilistic approach to information.
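
[Note: the contrast Justin is drawing, made concrete. A structured store answers the same fact from either direction, repeatably; a language model keeps no row to invert. A sketch using Python's built-in sqlite3; the one-row table and schema are purely illustrative.]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (person TEXT, mother TEXT)")
db.execute("INSERT INTO people VALUES ('Tom Cruise', 'Mary Lee Pfeiffer')")

# Forward: who is Tom Cruise's mother?
print(db.execute("SELECT mother FROM people WHERE person = ?",
                 ("Tom Cruise",)).fetchone())

# Reverse: whose mother is she? Same row, read back the other way --
# the reversal that unstructured statistical text generation does not
# guarantee.
print(db.execute("SELECT person FROM people WHERE mother = ?",
                 ("Mary Lee Pfeiffer",)).fetchone())
```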

Kay:
[50:50] I empathize with this as somebody who did a communication degree: learning about the Overton window, learning about framing, learning about the different ways that speech is manipulated, or framed in very particular ways to reach a certain end. Generally, in business, societal contexts, or politics, civic engagement, et cetera, things are trying to reach a certain action or certain end. And it's hard, and I really understand the feeling of nihilism, because I do struggle a lot with this sense of, okay, what is even "real," quote-unquote, information? I'm not super educated on information literacy to the same degree as a lot of my peers, I think; I just didn't study it in library school. But I think I do the work.

Jay:
[51:37] Probably better off to be honest

Kay:
[51:38] Yeah. Like, I don't know, I found that framework, and I was like, okay, that's true, that's something. What I take away from it is the authority point: authority is constructed, and it's something that is contextual. So for me, there are two things I think about. However I think about information, it's like, okay, what end is this information trying to reach? If you're thinking about, I don't know, someone trying to give some kind of fact to you, some kind of statistical fact, say some politician or whatever doing that, and you're like, I think that's wrong? Great, act on that impulse. Also just try to look up more information about what that thing is. I think that's kind of an obvious point, but, you know, taking the step to critically comprehend what people are saying. Also, thinking about integrity, I

Kay:
[52:29] Think also, when I was writing it, I was thinking a lot about file integrity, and the literal metadata of files. The technical metadata helps to construct what this thing is. So that's where I feel comfortable speaking on it. It's definitely one of those, I have to wrap up this paper, this is a solution I'm thinking about, things. But yeah, I really like the idea of being able to track where information is coming from, and understanding: what is this meant to serve? Who is saying this? What is their context? Why are they saying this to me in this particular moment? What is the goal here? Like we were all saying before, about understanding what the vendor's goal is meant to be: surely, to some degree, they are trying to provide a service, but that service is going to come at the expense of our money, right? So what does that really impact, what does that really mean for us, and what is the power dynamic there? So, thinking about power and the role of that, as well as possibilities for framing, and intent: a lot of things are going to intend to mislead people.
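
[Note: one concrete form of the file integrity Kay mentions is a checksum recorded alongside technical metadata; if a freshly computed hash matches the stored one, the bytes haven't changed. A minimal sketch in Python; the function name and the example path are hypothetical.]

```python
import hashlib
import os

def file_fingerprint(path: str) -> dict:
    # Stream the file in chunks so large files don't have to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    stat = os.stat(path)
    return {
        "sha256": h.hexdigest(),   # changes if even one byte changes
        "bytes": stat.st_size,     # technical metadata describing the file
        "modified": stat.st_mtime,
    }

# Usage: record the fingerprint at ingest, recompute later, compare hashes.
# print(file_fingerprint("example.pdf"))
```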

Kay:
[53:41] And also to increase engagement, as we've seen, obviously, in the last many years of just being on social media. A lot of information is just meant to animate you, meant to excite you in a way that gets you pissed off, that gets you to create content that then makes more money for these platforms. Yeah, that's my little social media studies soapbox.

Sadie:
[54:05] On the topic of integrity, I'm throwing this out here because it's one of the parallels that I've seen for a long time: in cybersecurity, there's the CIA principle, which is confidentiality, integrity, and availability, and how you have to balance those three things when you're basically doing a risk assessment. And information is basically the same, right? You want to think about the confidentiality of your information, what the availability of it is, and the integrity. And the integrity part has dropped a lot, I feel. So I'm just throwing that out there, because it's one of those things that I think about all the time in relation to a bunch of different things, and I wanted to get it into the show notes. So there you go.

Kay:
[54:45] That's a really helpful resource. I'm thinking also about subjectivity in information, and the importance of thinking critically about the person themselves conveying information to you, where they might stand with an institution or a certain infrastructure, and how that frames speech. I mean, I think about this a lot, going back to library leaders and AI. I think a lot of people are feeling a sense of, if I don't parrot this talking point about AI, then I might not get a job, or I might not be accepted. I think it's really about belonging. Because when I talk about this with people, or when I'm on social media or whatever, sometimes I feel like there is a sense of, well, the cool kids don't like AI, and I'm not a cool kid, and that makes me feel bad about myself. And it's like, okay, we're adults. I don't know why this is happening. So sometimes I think there's a lot of ego, and emotion, affect, involved in these discussions, that I wish more library people were talking about. But at least in scholarship, I think they do, at least in the words and the books and such.

Justin:
[55:59] Yeah, there's a lot of signaling that has to happen, which is, you don't necessarily need to believe something, but you sign on for the beliefs. I mean, it applies to almost any kind of social situation: you say certain things in order to show that you are in some kind of in-group. So yeah, among boosters, it's definitely, I'm with it, I'm with this group, I'm with the people who are making the money, who are doing the stuff, who are changing the world. Even if they don't understand AI in any way, they sign up for those beliefs. Even if they don't entirely believe them themselves, they have a belief about those beliefs: that they're good things to believe. Well, anyway, I think we've covered everything. Is there anything that we missed?

Kay:
[56:43] I could talk about the junior fellows program a little bit.

Justin:
[56:46] Oh, yeah. In case other people were interested in doing that. Yeah.

Kay:
[56:49] I mean, professional-development-wise, I think that was a really great experience. If folks are looking for paid internships that have remote options, I really recommend it. However, you do have to become a federal employee, at least temporarily, so that's just sort of a barrier to it. Besides that, you get access to understanding a part of the Library, which is pretty cool. I worked with the web archiving section, and everybody there was really great. And I got to work with the Mass Communications Web Archive, which was really fun. I got to essentially do cataloging, which was really impactful, and just help people understand, within a certain subject matter, how to organize more files and records and stuff. Even though it was three months, I felt like it was impactful. So I recommend that to people. I think you have to be coming out of school; it could be undergrad or grad. I actually was one of the few people who were out of grad school; most people were out of undergrad and going into library school, or thinking about library school, which is very great. So yeah, it's paid, depending on where you are. ALA has some things, certainly, but if you want a little more concentrated, project-focused work experience, I recommend the Junior Fellows Program.

Jay:
[58:06] I'm glad they have remote options now. I thought about doing it when I was coming out of undergrad, before I went into library school, but there weren't remote options at the time. And so it was like, I'm going to live in Washington, D.C. for three months? And I was like, I can't do that, dog. But yeah, I wanted to do it. It's really cool that you got to, and that they do remote options now.

Kay:
[58:26] I was grateful to be able to do it, and to also be able to take a leave from my job; that was really impactful. If I wasn't able to do that, I wouldn't have. And also, I live with my partner, we split costs, so there are ways that it worked for me. I would say it certainly probably fits better for folks who are earlier in their career journey, who can take a couple months off from a job, or just start working at the Library of Congress, I guess. It's a great recruitment program, basically. It's a good way to get people in the door. But yeah, DC is not for me at the moment, at least. But yeah.

Justin:
[59:01] All right. I'm going to put the article in the notes, and everything that we mentioned, all the books. Do you want to plug anything, where people can find you, anything like that?

Kay:
[59:11] I am on Bluesky at k, the letter K, and then s-l-a-t-e-r, dot bsky dot social. That's the main place you can find me where I talk about library stuff. I've also worked with Library Freedom Project on the AI and Libraries Survey, and we're doing a lot of work with that. Right now we're taking the survey in, and we're looking at results and doing all the fancy coding and stuff, so that's cool. Yeah, a lot of current projects. I have a lot of applications in for things, so I'm sort of incubating, but trying to do more work specifically about data centers and data workers, and connecting that to information studies. Yeah.

Jay:
[59:46] Also, hire Kay. Yeah, please. Okay.

Kay:
[59:48] I'm in the Chicagoland area. GoCom would be cool. I work in a makerspace right now, but it's not forever for me. But yes, GoCom Archives, A+.

Justin:
[59:59] Nice. All right. Well, thanks for coming back for a third time.

Kay:
[1:00:02] Yeah, thanks for having me. I'm so happy to see your faces.

Justin:
[1:00:06] Yeah. Good night.
Editor is loading...