Hamish Johnston: Hello, and welcome to the Physics World Weekly podcast. I'm Hamish Johnston. It's September 18, and we're in the middle of Peer Review Week. This year's theme is rethinking peer review in the AI era. My guest in this episode is Laura Feetham-Walker, who is reviewer engagement manager at IOP Publishing. As well as publishing Physics World, IOP produces over 100 scholarly journals, and it has just reported the results of a new worldwide survey of reviewers' attitudes to the use of artificial intelligence in the peer review process. That report is called AI and Peer Review 2025, and Laura is the lead author. Here's our conversation.

Hamish: Hi, Laura. Welcome to the podcast.

Laura Feetham-Walker: Thanks, Hamish. It's great to be here.

Hamish: So, Laura, before we talk about the survey, can you give us an idea of what IOP Publishing's current policy is on the use of artificial intelligence in the peer review process? And is that policy in line with those of most other scholarly publishers?

Laura: Yeah, it's a good question. Our current policy is prohibitive: we do not accept or condone the use of large language models to write peer review reports at all, or to edit them. That's partly because we have ethical concerns about uploading confidential manuscripts into these AI chatbots; we don't know how that information is being used. It's also because, as we'll talk about in a moment, the views of the scientific community are really polarized, and a large proportion of researchers do not feel comfortable with AI being used to review their manuscripts in any way. And so we went with a prohibitive policy.
There are lots of publishers who have the same policy, but when you look across the industry there's an enormous amount of variation in what different publishers are mandating. Another common approach is to accept the use of generative AI for light editing, language and grammar and things like that, while emphasizing that the reviewer is ultimately responsible for the content of the review. Obviously, that's very difficult to police; it's difficult to know the extent to which a reviewer might have used a large language model. And there's another large proportion of publishers whose policy is that reviewers can use large language models to write or edit their reviews, but they have to be honest with the publisher and with the authors about how the tools were used, usually by ticking a box or making a statement.

I think what we really need as an industry is some kind of harmonization of these policies, because reviewers often aren't aware of the specific policy of the publisher they're reviewing for. There are so many publishers, and reviewers might be getting lots and lots of requests from different journals. I think we need to work together a bit more so that we're all on the same page; that will help reviewers, and it will also help authors.
Hamish: It sounds to me that surveys like this are really needed, because, as you said, reviewers are polarized, and I'm guessing authors are polarized as well in terms of whether they want their papers to be reviewed using AI. So, with this survey that you've just concluded, who did you speak to? How many respondents did you have, and what kind of questions did you ask?

Laura: Sure. We put this out to our peer review community; that's people who have been invited to review for us or have reviewed for us in the past. We got about 350 responses from a really diverse group: geographically diverse, with a good mix of genders, career levels and subject areas. So we were quite pleased with the diversity of the responses.

It was quite recently that we did another survey: in 2024 we published the State of Peer Review report, which asked much broader questions of the peer review community, and one of those questions was about the use of AI. But this is such a fast-moving field that we felt the need to go out a year later and ask more detailed questions. And what we have found is that things have shifted even in that quite short space of time. I think we, as publishers, need to understand what our communities are feeling, to stick with them throughout this process, and to adapt our policies accordingly.

So we asked a whole range of questions of these 350 people. One of the main insights came from repeating a question we'd asked in 2024: what do you think the impact of generative AI will be on peer review? And we did see a shift, in that far fewer people in 2025 were neutral. In the 2024 survey, 36% of people said they didn't think it would have much of an impact; in 2025 that had fallen to 22%. So that's a drop of 14 percentage points, and those respondents had moved to either end of the spectrum: a two point increase in respondents who thought AI would have a negative impact overall, and a 12 point increase in respondents who thought it would have a positive impact overall.

It's important to bear in mind that this is a different sample. It's a different population of people, and a smaller sample size, so it might just be that we got a different mix of people. But I do have a feeling that things are becoming more polarized. It might be that more people have become aware of these tools in the last year and have thought a bit more carefully about the impact they might have on peer review.
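[Editor's aside: the shift Laura quotes is easy to sanity-check, since the drop in "neutral" responses should equal the combined rise at the two poles. A minimal Python sketch, using only the rounded percentages from the conversation:]

    # Survey figures quoted above, in percentage points.
    neutral_2024, neutral_2025 = 36, 22
    drop_in_neutral = neutral_2024 - neutral_2025   # 14 points fewer "neutral"
    rise_negative, rise_positive = 2, 12            # where those respondents went
    assert drop_in_neutral == rise_negative + rise_positive
    print(f"{drop_in_neutral} points left 'neutral': "
          f"{rise_negative} went negative, {rise_positive} went positive")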
Hamish: I see. And I suppose it does make sense that this polarization would increase, maybe simply because people are becoming more aware, as you say; AI has become almost normalized in everyday life for a lot of people. Do you think it's that, combined with the fact that people have now used AI themselves, so they know what it's capable of and project that onto how they could use it in a review, or how they wouldn't want it to be used in a review?
Laura: It's a really good question: to what extent is the research community truly aware of what these large language models are capable of, of what might be a good use for them in the peer review process, and of where they shouldn't be used? We asked lots of free-text questions and analyzed the responses, which were really interesting. And, again, they were very polarized. There were a lot of people who raised serious ethical concerns. But there were also a lot of people who said, well, I use large language models all the time; actually, I use them for analysis, and I use them to analyze the manuscript under review.

And, really, when you understand how LLMs work under the hood, they shouldn't be used for scientific analysis and critique; it's not something they're capable of. What large language models do is predict the most likely next word, and they're capable of producing text that is very convincing but not necessarily very accurate. That's something we see a lot when we look at fully LLM-produced peer reviews.

So I'm not sure that the physical-science research community, certainly some of the respondents to our survey, are fully aware of how these tools work and what their drawbacks are, where their blind spots are.
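[Editor's aside: a toy sketch of the next-word prediction Laura describes. The vocabulary, scores and prompt below are invented purely for illustration, not taken from any real model or API, but they show why the output is optimized for plausibility rather than accuracy:]

    import math

    def softmax(scores):
        # Convert raw scores into probabilities that sum to 1.
        exps = {word: math.exp(s) for word, s in scores.items()}
        total = sum(exps.values())
        return {word: e / total for word, e in exps.items()}

    # Hypothetical scores a model might assign to candidate next words
    # after the prompt "The results of the experiment were ..."
    scores = {"significant": 2.1, "inconclusive": 0.8, "purple": -3.0}

    probs = softmax(scores)
    print(max(probs, key=probs.get))
    # -> "significant": the most plausible continuation wins;
    #    nothing in the procedure checks whether it is true.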
Hamish: Yeah, well, that sounds like a problem. But I'm guessing you're dealing with some smart, tech-savvy people, so at some point the community will probably have a better grasp of the issues involved and of what AI can and can't do.

Another interesting thing about the survey is that it reveals a split between the views of early- and later-career researchers. Perhaps not surprisingly, early-career researchers tend to be more positive about the impact of AI than their senior colleagues, whereas later-career respondents tend to be more neutral about the possible impacts. I know this is somewhat stereotypical, but is it that early-career researchers are more savvy when it comes to AI and actually use it, whereas people further on in their careers aren't picking up new tools because they're happy with the way they've done things in the past? Is that the reason, or is that just a horrible stereotype I've unveiled?

Laura: I think it's a very good question. We did this sub-analysis on the question of what people think the impact will be; that's where we looked at the generational divide. And, yeah, you're completely right: early-career researchers were much more likely to have a positive view of the future impact of generative AI.

It may well be because they're digital natives, generally just more comfortable with these kinds of online tools. But I think it's also important to consider that early-career researchers now are starting out in a very different landscape. Those early-career jobs in academia look very different to how they did thirty, twenty, even ten years ago. The volume of administrative tasks is much higher, more is expected of their time, and there's a lot more precarity in their work. So it might be a product of the fact that early-career researchers are less likely to have experienced peer review as it traditionally should be: a positive experience in which the authors receive really useful and supportive comments that help them to improve their manuscript and to improve as researchers.
And then they can go on and get their paper published. This is purely anecdotal, it's just my opinion, but I wonder if that's also a factor: they haven't experienced peer review in the same way that their senior colleagues have.

Hamish: I think there are some very interesting dynamics in how people adopt AI and why they adopt it. So I suppose more surveys are needed there as well.

Laura: Absolutely. You could ask so many questions. I think this is so fascinating, and things are moving so quickly. What I really enjoyed reading were the free-text comments, because it was very clear looking at those... in fact, I can read you some of them here.

Hamish: Oh, go ahead.

Laura: We asked respondents to tell us whether they thought there were any ethical issues around the use of generative AI in peer review, and there was a real range of responses, so I'm just going to read you a couple. One was: "The theft of the corpus of data used to train AI models, the replacement of human labor, and the wasteful energy usage." Another was: "The main ethical issue is the transfer of responsibility over knowledge from a human intelligence to a non-biological intelligence with unknown administrators or proprietors."

These are quite existential ethical concerns, actually; they're not necessarily concerns about the capabilities of the software. Often, people who are anti-AI and don't like the idea of AI being used in peer review have quite deep-seated concerns that are broader than the fact that it might not produce very good reports.
We also asked people whether they thought they would be able to detect an AI-authored peer review report if they were an author and received one on their manuscript, and what they thought the hallmarks of these reports are. What came up again and again was that these tools just do not have the depth of knowledge of an expert peer reviewer; that was by far the most common response. People have seen reports written by generative AI tools, and they're very high-level: they make very broad, generic statements, and they're just not useful. It is quite obvious that they've been written by generative AI. Unfortunately, we see quite a few fully generative-AI-written peer review reports ourselves, and we find the same thing. They're just not up to scratch, they're not helpful, and, actually, we can spot them a mile off.

That's not necessarily the same as reports that have been edited or augmented by AI, and that's where we get into the different uses, which are quite difficult to police. When we asked people in this survey whether they had used generative AI tools, and if so, how, the most common response was: I wrote my review, maybe in bullet points or note form, and then put it into an AI tool to improve the flow and grammar. That was by far the most common response, and it's a completely different use case to uploading a manuscript into ChatGPT, or whatever, and asking it to summarize it or write a review.
That said, the second most commonly reported use of AI tools, from 46 people, was to digest or summarize an article under review. And that, for me, is a little bit concerning, because it is not what current large language models are good at. In fact, there was a really interesting study published last week by researchers at Sheffield University and colleagues in Tokyo and Finland. They looked specifically at ChatGPT, asking it to summarize and critique a large number of manuscripts that had been retracted for serious ethical concerns. On the whole, ChatGPT was very positive about these papers; it very rarely picked up on the issues that had led to the retractions. Generally speaking, when you put a manuscript into one of these chatbots, you're going to get a pretty positive, gentle peer review, which isn't very accurate.

Hamish: Do you think ChatGPT, or a large language model generally, is trying to please the person who's using it, couching any review in positive language because maybe that's more acceptable to the person making the request?

Laura: Well, it's interesting you ask that, because there were recently reports in the news that new versions, particularly of ChatGPT, were to be made less sycophantic. Previous versions were very sycophantic, very positive about everything the user was saying. So maybe. Maybe it's that.
In the study of retracted manuscripts, though, where they asked ChatGPT to summarize the manuscripts, I would imagine the prompts were quite neutral. Who knows?

Hamish: Yeah, well, that's the problem, isn't it, that we don't know? And I just wanted to check something with you, Laura. If a reviewer were to upload a manuscript and use, let's say, ChatGPT to provide them with a summary, is that something we don't want people to be doing?

Laura: If that summary is being submitted as a peer review, then no, that's not acceptable. And, as I say, our policy doesn't permit the use of these tools to edit reports either.

Perhaps it's worth talking about people's feelings when we asked them to put their author hats on: how would you feel if your manuscript was reviewed, in full or in part, by a large language model? The responses were quite telling. 57% of people said they would be unhappy if a reviewer used generative AI to write an entire peer review report on a manuscript they had co-authored. And when we asked, okay, that's writing a full report, but how would you feel if AI was used to augment or edit a peer review report, the proportion was still quite high: roughly 40% of people said they wouldn't be comfortable with that.

So, no, we definitely don't want fully AI-authored peer review reports. There is a real quality issue, there are lots of ethical issues, and we know that a majority of our authors wouldn't be happy with that.
There's a whole other practical concern here as well, which is that these tools are currently free and available to anyone with an internet connection. If an author wanted an AI-authored peer review report, they could get one themselves in seconds. That's not what they submit their manuscripts to journals for; it's not what they expect from peer review.

And, again, reading the free-text comments: we have these two words that describe the process, peer review. People often focus on the "review", as if this were only about assessing scientific rigor. But the "peer" part is really important to people, and that came across loud and clear in the responses. They feel strongly that this manuscript, this research they've worked really hard on, should be assessed by someone who is an expert, a real person, perhaps someone they know and have met at conferences, but certainly someone with the depth of knowledge and the passion for the field to give a good and useful review.
Hamish: And, Laura, I wanted to ask you about the survey in general, but perhaps more about the written comments, where I suppose you're learning about people's passions about AI. Do you think these two different views, this polarization, will be difficult to reconcile as we go forward? Because AI is here to stay; it's a useful tool, and people are using it more and more every day and learning how to use it effectively. Is it going to be difficult to reconcile the views of the "never AI" people with the "I can use AI responsibly when I do a peer review" people? Is that going to be tough?

Laura: I actually think the opposite. There's a lot of noise at the moment, and there's been a lot of rapid change, and that will lead to divisions and differences of opinion. But as much as I may sound like I've been quite hard on these tools and quite negative about them, I am an optimist, and I do believe in the power of communities to reach a consensus and some equilibrium.

The way that these tools have been presented from the top down has often been quite fatalistic. In a lot of cases the line seems to have been: these tools are here now, and if you don't get with the program, if you don't use them, you're going to be left behind. A lot of people have got on board with that, and they're happy with it; they think, well, it's true, you can't put the genie back in the bottle, so let's just start using them. But there's a backlash against that view as well, with communities and people saying: no, we do have a choice, and we can think in a more nuanced way about the ethics and about how we want these tools to be used. And I think, fundamentally, throughout human history, the way that tools are adopted and used, and whether they become widespread, is not dictated from the top down. It's dictated from the bottom up, through communities who get together, test out tools, and decide whether those tools are going to help them achieve their goals.
And I think that's what's going to happen, with a little more time, in the physical-science research community. Ultimately, the community is going to decide, and we, as publishers, need to follow their lead. In the short term, while we are seeing a divergence of views, what we need to be careful to do is bring everyone along with us. That includes different groups; we've discussed how there's a generational divide, so how do we have policies that are inclusive of both early-career and senior researchers? The ability to opt in and out will be important, but the key thing will be transparency: real transparency around how these AI models are being used, and around how we as publishers are using AI in our own processes.

Hamish: I see. Because it's not just us, is it? It's not just the scientific community that's grappling with AI and with an overabundance of information, some of it very poor. This is the issue facing society at the moment, so really we're not alone in this. On the other hand, I suppose we could lead the way as scientific publishers, providing a way of using AI responsibly to process information and make the world a better place.
Laura: Absolutely. I think what the future might hold is tools that are data-safe, that are ethical, that protect research integrity, and that are embedded within our systems, so that there's complete transparency and safety in the way they are used. And, as I say, allowing researchers to opt in or out of their use will matter: there will still be people with strong views, and they should be able to bypass these tools if they want to.

Hamish: So, exciting times as usual, I suppose, in the publishing industry. Thanks, Laura. Thanks so much for coming on the Physics World Weekly podcast.

That was Laura Feetham-Walker, who is reviewer engagement manager at IOP Publishing. The reviewer survey is called AI and Peer Review 2025, and it can be found on the IOP Publishing website; I'll put a link to the report in the podcast notes. I'm afraid that's all the time we have for this week's podcast. Thanks to Laura for a fascinating discussion, and a special thanks to our producer, Fred Isles. We'll be back again next week. See you then.