<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "https://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article article-type="editorial" dtd-version="1.1" specific-use="sps-1.9" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
	<front>
		<journal-meta>
			<journal-id journal-id-type="publisher-id">ijeph</journal-id>
			<journal-title-group>
				<journal-title>Interdisciplinary Journal of Epidemiology and Public Health</journal-title>
				<abbrev-journal-title abbrev-type="publisher">Interdiscipl. J. Epidemiol. Public Health</abbrev-journal-title>
			</journal-title-group>
			<issn pub-type="ppub">2665-427X</issn>
			<publisher>
				<publisher-name>Facultad Ciencias de la Salud, Universidad Libre</publisher-name>
			</publisher>
		</journal-meta>
		<article-meta>
			<article-id pub-id-type="doi">10.18041/2665-427X/ijeph.1.11268</article-id>
			<article-categories>
				<subj-group subj-group-type="heading">
					<subject>Editorial</subject>
				</subj-group>
			</article-categories>
			<title-group>
				<article-title>Human intelligence for authors, reviewers and editors using artificial intelligence</article-title>
			</title-group>
			<contrib-group>
				<contrib contrib-type="author">
					<contrib-id contrib-id-type="orcid">0000-0001-8091-9954</contrib-id>
					<name>
						<surname>Palacios Gomez</surname>
						<given-names>Mauricio</given-names>
					</name>
					<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
				</contrib>
				<aff id="aff1">
					<label>1</label>
					<institution content-type="original">Editor en jefe de la Revista Colombia Médica, Facultad de Salud, Universidad del Valle, Cali, Colombia.</institution>
					<institution content-type="normalized">Universidad del Valle</institution>
					<institution content-type="orgdiv2">Revista Colombia Médica</institution>
					<institution content-type="orgdiv1">Facultad de Salud</institution>
					<institution content-type="orgname">Universidad del Valle</institution>
					<addr-line>
						<city>Cali</city>
					</addr-line>
					<country country="CO">Colombia</country>
				</aff>
			</contrib-group>
			<author-notes>
				<corresp id="c1">
					<label>Correspondence:</label> Mauricio Palacios Gomez. E-mail: <email>mao.palacios@correounivalle.edu.co</email>
				</corresp>
			</author-notes>
			<pub-date date-type="pub" publication-format="electronic">
				<day>19</day>
				<month>03</month>
				<year>2024</year>
			</pub-date>
			<pub-date date-type="collection" publication-format="electronic">
				<season>Jan-Jun</season>
				<year>2024</year>
			</pub-date>
			<volume>7</volume>
			<issue>1</issue>
			<elocation-id>e-11268</elocation-id>
			<permissions>
				<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/" xml:lang="en">
					<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0)</license-p>
				</license>
			</permissions>
			<counts>
				<fig-count count="0"/>
				<table-count count="2"/>
				<equation-count count="0"/>
				<ref-count count="21"/>
				<page-count count="0"/>
			</counts>
		</article-meta>
	</front>
	<body>
		<p>Editorial published with permission of the editor of the Colombia Médica journal. Previously published in <ext-link ext-link-type="uri" xlink:href="https://colombiamedica.univalle.edu.co/index.php/comedica/article/view/5867">https://colombiamedica.univalle.edu.co/index.php/comedica/article/view/5867</ext-link>. The last two paragraphs were modified to make the editorial more generic.</p>
		<p>We call artificial intelligence any machine that processes information with some purpose, following the logical rules of computation that Turing described more than 70 years ago <xref ref-type="bibr" rid="B1"><sup>1</sup></xref>. These machines work with instructions called algorithms: finite, well-defined sequences of information processing implemented by automata (computers) or any digital technology to optimize a process <xref ref-type="bibr" rid="B2"><sup>2</sup></xref>. This means that the purpose of artificial intelligence is optimization.</p>
		<p>Optimization is the ability to do or solve something in the most efficient way possible and, in the best case, using the least amount of resources. The intended optimization is programmed and preset by humans; therefore, these technologies are tools humans create for human purposes <xref ref-type="bibr" rid="B3"><sup>3</sup></xref>.</p>
		<p>The optimization capability of artificial intelligence is staggering. It has been estimated that the use of artificial intelligence will facilitate the achievement of 134 of the 169 goals agreed in the 2030 Agenda for Sustainable Development <xref ref-type="bibr" rid="B4"><sup>4</sup></xref>. However, the same evaluation projected that it could hinder progress on 59 goals of the agreement, with social, economic, educational, legal and gender inequality being the phenomena most affected by artificial intelligence.</p>
		<p>This projection shows us that it is necessary to counterbalance the development and implementation of processes mediated by artificial intelligence, to maintain reflection on and question the influence of these technological tools, and, above all, to ground them in human intelligence. In the data science and artificial intelligence environment, human intelligence can be defined as a collection of contextual tacit knowledge about human values, responsibility, empathy, intuition, or care for another living being that algorithms cannot describe or execute <xref ref-type="bibr" rid="B5"><sup>5</sup></xref>.</p>
		<p>Improving the care capabilities of health systems, having more accurate diagnoses, achieving the optimization of medical treatments, and generating more efficient and appropriate public health measures are the promises of the advances of artificial intelligence. The World Health Organization recognizes these expectations but warns of the need to guarantee transparency, explainability and understanding of each application based on artificial intelligence implemented in health, with permanent evaluation, ensuring equity, inclusion, and sustainability <xref ref-type="bibr" rid="B6"><sup>6</sup></xref>.</p>
		<p>Artificial intelligence is already part of the research supporting the manuscripts submitted to the editorial process of scientific journals in the health area. Fortunately, we have guidelines that allow authors to report their manuscripts in full; these enable peer reviewers and editors to judge better whether to publish them. So far, the Equator Network website has published twelve guidelines for manuscripts on artificial intelligence-based research, and all of them share a concern for transparency about the population from which the data were acquired, the design and development of the algorithm, the training of the model, and the external validity of the optimized processes (<xref ref-type="table" rid="t1">Table 1</xref>).</p>
		<p>
			<table-wrap id="t1">
				<label>Table 1</label>
				<caption>
					<title>Publication guidelines for artificial intelligence research manuscripts available on the Equator Network website</title>
				</caption>
				<table>
					<colgroup>
						<col/>
						<col/>
						<col/>
					</colgroup>
					<thead>
						<tr>
							<th align="left">Guideline</th>
							<th align="left">Name</th>
							<th align="left">Date</th>
						</tr>
					</thead>
					<tbody>
						<tr>
							<td align="left">PRIME</td>
							<td align="left">Cardiovascular Imaging-Related Machine Learning Evaluation</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B10"><sup>10</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">MI-CLAIM</td>
							<td align="left">Minimum information about clinical artificial intelligence modeling</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B11"><sup>11</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left"> </td>
							<td align="left">Artificial intelligence in dental research</td>
							<td align="left">2021 <xref ref-type="bibr" rid="B12"><sup>12</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">SPIRIT-AI</td>
							<td align="left">Guidelines for clinical trial protocols for interventions involving artificial intelligence</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B13"><sup>13</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">CONSORT-AI</td>
							<td align="left">Reporting guidelines for clinical trial reports for interventions involving artificial intelligence</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B14"><sup>14</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">MINIMAR</td>
							<td align="left">Reporting standards for artificial intelligence in health care</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B15"><sup>15</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">CAIR</td>
							<td align="left">Guideline of Clinical AI Research</td>
							<td align="left">2021 <xref ref-type="bibr" rid="B16"><sup>16</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">CLEAR</td>
							<td align="left">CheckList for EvaluAtion of Radiomics research</td>
							<td align="left">2023 <xref ref-type="bibr" rid="B17"><sup>17</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left"> </td>
							<td align="left">Reporting machine learning analyses in clinical research</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B18"><sup>18</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">CLAIM</td>
							<td align="left">Checklist for Artificial Intelligence in Medical Imaging</td>
							<td align="left">2020 <xref ref-type="bibr" rid="B19"><sup>19</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">DECIDE-AI</td>
							<td align="left">guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence</td>
							<td align="left">2022 <xref ref-type="bibr" rid="B20"><sup>20</sup></xref>
							</td>
						</tr>
						<tr>
							<td align="left">STREAM-URO</td>
							<td align="left">Reporting of Machine Learning Applications in Urology</td>
							<td align="left">2021 <xref ref-type="bibr" rid="B21"><sup>21</sup></xref>
							</td>
						</tr>
					</tbody>
				</table>
			</table-wrap>
		</p>
		<p>However, the writing and editorial process does not have equivalent guidelines. Authors, peer reviewers and editors are dazzled by algorithms that promise efficiency in their work. This fascination exposes us to the risk of absolute trust in artificial intelligence, known as algorithmocracy, a government in which humans and machines obey algorithms <xref ref-type="bibr" rid="B2"><sup>2</sup></xref>.</p>
		<p>We have signs that algorithms are not ideal in scientific publishing. For years, we have questioned the algorithms with which bibliometric indexes classify (or disqualify?) scientific journals, yet we accept that research supervisory bodies consider them the gold standard for measuring scientific productivity. Authors frequently resort to artificial intelligence writing tools, such as ChatGPT, Bard and Bing, with little reflection on their limitations, which may introduce factual and reasoning errors into scientific writing <xref ref-type="bibr" rid="B7"><sup>7</sup></xref>. Editors may mistakenly accept the similarity percentage issued by anti-plagiarism algorithms as the rule for evaluating the originality of a manuscript, completely replacing expert judgment. Whenever artificial intelligence optimization is used, it should be remembered that technology does not change society; human intelligence defines the creation of applications, their use and how they affect society. The opposite is to accept the thesis of technological determinism, which, although it will not lead us to an apocalyptic future like the one proposed by Skynet in the Terminator saga, will affect the equality, truth and originality of science <xref ref-type="bibr" rid="B8"><sup>8</sup></xref>.</p>
		<p>The editorial guidelines of journals should address the use of artificial intelligence in research and require authors' adherence to the publication guidelines for AI-based research available on the <italic>Equator Network</italic> website; this should become a standard for journals.</p>
		<p>In addition, journals that invoke the ICMJE (International Committee of Medical Journal Editors) and the WAME (World Association of Medical Editors) to adjust their ethical processes, editorial flow and author guidelines should also adopt the recommendations on the definition of authorship and on the use of artificial intelligence programs in the preparation and review of manuscripts submitted to journals <xref ref-type="bibr" rid="B9"><sup>9</sup></xref>. These recommendations, explained in an article reproduced from the WAME, are:</p>
		<p>
			<list list-type="bullet">
				<list-item>
					<p>Non-human authors are not accepted.</p>
				</list-item>
				<list-item>
					<p>Authors should be transparent when using chatbots and provide information on their use.</p>
				</list-item>
				<list-item>
					<p>Authors are responsible for the information produced with a chatbot in their article (including its accuracy and the absence of plagiarism) and for proper attribution of all sources.</p>
				</list-item>
				<list-item>
					<p>Reviewers and editors should advise authors if they used chatbots in evaluating the manuscript and generating revisions and correspondence, and should explain how they used them.</p>
				</list-item>
				<list-item>
					<p>Editors need appropriate tools to help them detect AI-generated or AI-altered content, for the sake of science and the public, to help ensure the integrity of health information and to reduce the risk of adverse health outcomes.</p>
				</list-item>
			</list>
		</p>
		<p>Colophon: If artificial intelligence optimizes our work, why do we have less free time?</p>
	</body>
	<back>
		<ref-list>
			<title>References</title>
			<ref id="B1">
				<label>1</label>
				<mixed-citation>1. Danziger S. Intelligence as a social concept: a socio-technological interpretation of the Turing test. Philos Technol. 2022; 35(3): 1-26. Doi: 10.1007/s13347-022-00561-z</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Danziger</surname>
							<given-names>S</given-names>
						</name>
					</person-group>
					<article-title>Intelligence as a social concept a socio-technological interpretation of the Turing test</article-title>
					<source>Philos Technol</source>
					<year>2022</year>
					<volume>35</volume>
					<issue>3</issue>
					<fpage>1</fpage>
					<lpage>26</lpage>
					<pub-id pub-id-type="doi">10.1007/s13347-022-00561-z</pub-id>
				</element-citation>
			</ref>
			<ref id="B2">
				<label>2</label>
				<mixed-citation>2. Astobiza AM. Ética algorítmica: Implicaciones éticas de una sociedad cada vez más gobernada por algoritmos. Dilemata. 2017; (24): 185-217.</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Astobiza</surname>
							<given-names>AM</given-names>
						</name>
					</person-group>
					<article-title>Ética algorítmica Implicaciones éticas de una sociedad cada vez más gobernada por algoritmos</article-title>
					<source>Dilemata</source>
					<year>2017</year>
					<issue>24</issue>
					<fpage>185</fpage>
					<lpage>217</lpage>
				</element-citation>
			</ref>
			<ref id="B3">
				<label>3</label>
				<mixed-citation>3. Hanna R, Kazim E. Philosophical foundations for digital ethics and AI Ethics: a dignitarian approach. AI Ethics. 2021; 1(4): 405-23. Doi: 10.1007/s43681-021-00040-9</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Hanna</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Kazim</surname>
							<given-names>E</given-names>
						</name>
					</person-group>
					<article-title>Philosophical foundations for digital ethics and AI Ethics a dignitarian approach</article-title>
					<source>AI Ethics</source>
					<year>2021</year>
					<volume>1</volume>
					<issue>4</issue>
					<fpage>405</fpage>
					<lpage>423</lpage>
					<pub-id pub-id-type="doi">10.1007/s43681-021-00040-9</pub-id>
				</element-citation>
			</ref>
			<ref id="B4">
				<label>4</label>
				<mixed-citation>4. Vinuesa R, Azizpour H, Leite I, Balaam M, Dignum V, Domisch S, et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun. 2020; 11(1): 233. Doi: 10.1038/s41467-019-14108-y</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Vinuesa</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Azizpour</surname>
							<given-names>H</given-names>
						</name>
						<name>
							<surname>Leite</surname>
							<given-names>I</given-names>
						</name>
						<name>
							<surname>Balaam</surname>
							<given-names>M</given-names>
						</name>
						<name>
							<surname>Dignum</surname>
							<given-names>V</given-names>
						</name>
						<name>
							<surname>Domisch</surname>
							<given-names>S</given-names>
						</name>
					</person-group>
					<article-title>The role of artificial intelligence in achieving the Sustainable Development Goals</article-title>
					<source>Nat Commun</source>
					<year>2020</year>
					<volume>11</volume>
					<issue>1</issue>
					<fpage>233</fpage>
					<lpage>233</lpage>
					<pub-id pub-id-type="doi">10.1038/s41467-019-14108-y</pub-id>
				</element-citation>
			</ref>
			<ref id="B5">
				<label>5</label>
				<mixed-citation>5. Özdemir V. Not all intelligence is artificial: data science, automation, and AI meet HI. OMICS. 2019; 23(2): 67-9. Doi: 10.1089/omi.2019.0003</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Özdemir</surname>
							<given-names>V</given-names>
						</name>
					</person-group>
					<article-title>Not all intelligence is artificial data science, automation, and AI meet HI</article-title>
					<source>OMICS</source>
					<year>2019</year>
					<volume>23</volume>
					<issue>2</issue>
					<fpage>67</fpage>
					<lpage>69</lpage>
					<pub-id pub-id-type="doi">10.1089/omi.2019.0003</pub-id>
				</element-citation>
			</ref>
			<ref id="B6">
				<label>6</label>
				<mixed-citation>6. WHO. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. Cited 2023 Sep 29. Available from: <ext-link ext-link-type="uri" xlink:href="http://apps.who.int/bookorders">http://apps.who.int/bookorders</ext-link>
				</mixed-citation>
				<element-citation publication-type="book">
					<person-group person-group-type="author">
						<collab>WHO</collab>
					</person-group>
					<source>Ethics and governance of artificial intelligence for health: WHO guidance</source>
					<year>2021</year>
					<publisher-loc>Geneva</publisher-loc>
					<publisher-name>World Health Organization</publisher-name>
					<date-in-citation content-type="access-date" iso-8601-date="2023-09-29">2023 Sep 29</date-in-citation>
					<comment>Available from: <ext-link ext-link-type="uri" xlink:href="http://apps.who.int/bookorders">http://apps.who.int/bookorders</ext-link>
					</comment>
				</element-citation>
			</ref>
			<ref id="B7">
				<label>7</label>
				<mixed-citation>7. Herbold S, Hautli-Janisz A, Heuer U, Kikteva Z, Trautsch A. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep. 2023; 13(1): 18617. Doi: 10.1038/s41598-023-45644-9</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Herbold</surname>
							<given-names>S</given-names>
						</name>
						<name>
							<surname>Hautli-Janisz</surname>
							<given-names>A</given-names>
						</name>
						<name>
							<surname>Heuer</surname>
							<given-names>U</given-names>
						</name>
						<name>
							<surname>Kikteva</surname>
							<given-names>Z</given-names>
						</name>
						<name>
							<surname>Trautsch</surname>
							<given-names>A</given-names>
						</name>
					</person-group>
					<article-title>A large-scale comparison of human-written versus ChatGPT-generated essays</article-title>
					<source>Sci Rep</source>
					<year>2023</year>
					<volume>13</volume>
					<issue>1</issue>
					<fpage>18617</fpage>
					<lpage>18617</lpage>
					<pub-id pub-id-type="doi">10.1038/s41598-023-45644-9</pub-id>
				</element-citation>
			</ref>
			<ref id="B8">
				<label>8</label>
				<mixed-citation>8. Kar P. Technology and the NHS-a world of false promises? BMJ. 2019; 367: l6135. Doi: 10.1136/bmj.l6135</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Kar</surname>
							<given-names>P</given-names>
						</name>
					</person-group>
					<article-title>Technology and the NHS-a world of false promises</article-title>
					<source>BMJ</source>
					<year>2019</year>
					<volume>367</volume>
					<fpage>l6135</fpage>
					<lpage>l6135</lpage>
					<pub-id pub-id-type="doi">10.1136/bmj.l6135</pub-id>
				</element-citation>
			</ref>
			<ref id="B9">
				<label>9</label>
				<mixed-citation>9. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña Jr JF, et al. Chatbots, IA Generativa y Manuscritos Académicos: Recomendaciones de WAME sobre “chatbots” e inteligencia artificial generativa en relación con las publicaciones académicas. Colomb Med (Cali). 2023; 54(3): e1015868. Doi: 10.25100/cm.v54i3.5868.</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Zielinski</surname>
							<given-names>C</given-names>
						</name>
						<name>
							<surname>Winker</surname>
							<given-names>MA</given-names>
						</name>
						<name>
							<surname>Aggarwal</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Ferris</surname>
							<given-names>LE</given-names>
						</name>
						<name>
							<surname>Heinemann</surname>
							<given-names>M</given-names>
						</name>
						<name>
							<surname>Lapeña</surname>
							<given-names>JF</given-names>
							<suffix>Jr</suffix>
						</name>
						<etal/>
					</person-group>
					<article-title>Chatbots, IA Generativa y Manuscritos Académicos: Recomendaciones de WAME sobre “chatbots” e inteligencia artificial generativa en relación con las publicaciones académicas</article-title>
					<source>Colomb Med (Cali)</source>
					<year>2023</year>
					<volume>54</volume>
					<issue>3</issue>
					<elocation-id>e1015868</elocation-id>
					<pub-id pub-id-type="doi">10.25100/cm.v54i3.5868</pub-id>
				</element-citation>
			</ref>
			<ref id="B10">
				<label>10</label>
				<mixed-citation>10. Sengupta PP, Shrestha S, Berthon B, Messas E, Donal E, Tison GH, et al. Proposed requirements for cardiovascular imaging-related machine learning evaluation (PRIME): A checklist: reviewed by the American College of Cardiology Healthcare Innovation Council. JACC Cardiovasc Imaging. 2020; 13(9): 2017. Doi: 10.1016/j.jcmg.2020.07.015</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Sengupta</surname>
							<given-names>PP</given-names>
						</name>
						<name>
							<surname>Shrestha</surname>
							<given-names>S</given-names>
						</name>
						<name>
							<surname>Berthon</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Messas</surname>
							<given-names>E</given-names>
						</name>
						<name>
							<surname>Donal</surname>
							<given-names>E</given-names>
						</name>
						<name>
							<surname>Tison</surname>
							<given-names>GH</given-names>
						</name>
					</person-group>
					<article-title>Proposed requirements for cardiovascular imaging-related machine learning evaluation (PRIME) A checklist: reviewed by the American College of Cardiology Healthcare Innovation Council</article-title>
					<source>JACC Cardiovasc Imaging</source>
					<year>2020</year>
					<volume>13</volume>
					<issue>9</issue>
					<fpage>2017</fpage>
					<lpage>2017</lpage>
					<pub-id pub-id-type="doi">10.1016/j.jcmg.2020.07.015</pub-id>
				</element-citation>
			</ref>
			<ref id="B11">
				<label>11</label>
				<mixed-citation>11. Norgeot B, Quer G, Beaulieu-Jones BK, Torkamani A, Dias R, Gianfrancesco M, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. 2020; 26(9): 1320. Doi: 10.1038/s41591-020-1041-y</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Norgeot</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Quer</surname>
							<given-names>G</given-names>
						</name>
						<name>
							<surname>Beaulieu-Jones</surname>
							<given-names>BK</given-names>
						</name>
						<name>
							<surname>Torkamani</surname>
							<given-names>A</given-names>
						</name>
						<name>
							<surname>Dias</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Gianfrancesco</surname>
							<given-names>M</given-names>
						</name>
					</person-group>
					<article-title>Minimum information about clinical artificial intelligence modeling the MI-CLAIM checklist</article-title>
					<source>Nat Med</source>
					<year>2020</year>
					<volume>26</volume>
					<issue>9</issue>
					<fpage>1320</fpage>
					<lpage>1320</lpage>
					<pub-id pub-id-type="doi">10.1038/s41591-020-1041-y</pub-id>
				</element-citation>
			</ref>
			<ref id="B12">
				<label>12</label>
				<mixed-citation>12. Schwendicke F, Singh T, Lee JH, Gaudin R, Chaurasia A, Wiegand T, et al. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J Dent. 2021;107: 103610. Doi: 10.1016/j.jdent.2021.103610 PMid:33631303</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Schwendicke</surname>
							<given-names>F</given-names>
						</name>
						<name>
							<surname>Singh</surname>
							<given-names>T</given-names>
						</name>
						<name>
							<surname>Lee</surname>
							<given-names>JH</given-names>
						</name>
						<name>
							<surname>Gaudin</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Chaurasia</surname>
							<given-names>A</given-names>
						</name>
						<name>
							<surname>Wiegand</surname>
							<given-names>T</given-names>
						</name>
					</person-group>
					<article-title>Artificial intelligence in dental research Checklist for authors, reviewers, readers</article-title>
					<source>J Dent</source>
					<year>2021</year>
					<volume>107</volume>
					<fpage>103610</fpage>
					<lpage>103610</lpage>
					<pub-id pub-id-type="doi">10.1016/j.jdent.2021.103610</pub-id>
				</element-citation>
			</ref>
			<ref id="B13">
				<label>13</label>
				<mixed-citation>13. Cruz RS, Liu X, Chan AW, Denniston AK, Calvert MJ, Darzi A, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020; 26(9): 1351-63. Doi: 10.1136/bmj.m3210 PMid:32907797</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Cruz</surname>
							<given-names>RS</given-names>
						</name>
						<name>
							<surname>Liu</surname>
							<given-names>X</given-names>
						</name>
						<name>
							<surname>Chan</surname>
							<given-names>AW</given-names>
						</name>
						<name>
							<surname>Denniston</surname>
							<given-names>AK</given-names>
						</name>
						<name>
							<surname>Calvert</surname>
							<given-names>MJ</given-names>
						</name>
						<name>
							<surname>Darzi</surname>
							<given-names>A</given-names>
						</name>
					</person-group>
					<article-title>Guidelines for clinical trial protocols for interventions involving artificial intelligence the SPIRIT-AI extension</article-title>
					<source>Nat Med</source>
					<year>2020</year>
					<volume>26</volume>
					<issue>9</issue>
					<fpage>1351</fpage>
					<lpage>1363</lpage>
					<pub-id pub-id-type="doi">10.1136/bmj.m3210</pub-id>
				</element-citation>
			</ref>
			<ref id="B14">
				<label>14</label>
				<mixed-citation>14. Liu X, Rivera SC, Moher D, Calvert MJ, Denniston AK. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension. BMJ. 2020; 370: m3164. Doi: 10.1136/bmj.m3164</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Liu</surname>
							<given-names>X</given-names>
						</name>
						<name>
							<surname>Rivera</surname>
							<given-names>SC</given-names>
						</name>
						<name>
							<surname>Moher</surname>
							<given-names>D</given-names>
						</name>
						<name>
							<surname>Calvert</surname>
							<given-names>MJ</given-names>
						</name>
						<name>
							<surname>Denniston</surname>
							<given-names>AK</given-names>
						</name>
					</person-group>
					<article-title>Reporting guidelines for clinical trial reports for interventions involving artificial intelligence the CONSORT-AI Extension</article-title>
					<source>BMJ</source>
					<year>2020</year>
					<volume>370</volume>
					<fpage>m3164</fpage>
					<lpage>m3164</lpage>
					<pub-id pub-id-type="doi">10.1136/bmj.m3164</pub-id>
				</element-citation>
			</ref>
			<ref id="B15">
				<label>15</label>
				<mixed-citation>15. Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc. 2020; 27(12): 2011. Doi: 10.1093/jamia/ocaa088 PMid:32594179</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Hernandez-Boussard</surname>
							<given-names>T</given-names>
						</name>
						<name>
							<surname>Bozkurt</surname>
							<given-names>S</given-names>
						</name>
						<name>
							<surname>Ioannidis</surname>
							<given-names>JPA</given-names>
						</name>
						<name>
							<surname>Shah</surname>
							<given-names>NH</given-names>
						</name>
					</person-group>
					<article-title>MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care</article-title>
					<source>J Am Med Inform Assoc</source>
					<year>2020</year>
					<volume>27</volume>
					<issue>12</issue>
					<fpage>2011</fpage>
					<lpage>2011</lpage>
					<pub-id pub-id-type="doi">10.1093/jamia/ocaa088</pub-id>
				</element-citation>
			</ref>
			<ref id="B16">
				<label>16</label>
				<mixed-citation>16. Olczak J, Pavlopoulos J, Prijs J, Ijpma FFA, Doornberg JN, Lundström C, et al. Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal. Acta Orthop. 2021; 92(5): 513. Doi: 10.1080/17453674.2021.1918389</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Olczak</surname>
							<given-names>J</given-names>
						</name>
						<name>
							<surname>Pavlopoulos</surname>
							<given-names>J</given-names>
						</name>
						<name>
							<surname>Prijs</surname>
							<given-names>J</given-names>
						</name>
						<name>
							<surname>Ijpma</surname>
							<given-names>FFA</given-names>
						</name>
						<name>
							<surname>Doornberg</surname>
							<given-names>JN</given-names>
						</name>
						<name>
							<surname>Lundström</surname>
							<given-names>C</given-names>
						</name>
					</person-group>
					<article-title>Presenting artificial intelligence, deep learning, and machine learning studies to clinicians and healthcare stakeholders: an introductory reference with a guideline and a Clinical AI Research (CAIR) checklist proposal</article-title>
					<source>Acta Orthop</source>
					<year>2021</year>
					<volume>92</volume>
					<issue>5</issue>
					<fpage>513</fpage>
					<lpage>513</lpage>
					<pub-id pub-id-type="doi">10.1080/17453674.2021.1918389</pub-id>
				</element-citation>
			</ref>
			<ref id="B17">
				<label>17</label>
				<mixed-citation>17. Kocak B, Baessler B, Bakas S, Cuocolo R, Fedorov A, Maier-Hein L, et al. CheckList for evaluation of radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging. 2023; 14(1): 20. Doi: 10.1186/s13244-023-01415-8</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Kocak</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Baessler</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Bakas</surname>
							<given-names>S</given-names>
						</name>
						<name>
							<surname>Cuocolo</surname>
							<given-names>R</given-names>
						</name>
						<name>
							<surname>Fedorov</surname>
							<given-names>A</given-names>
						</name>
						<name>
							<surname>Maier-Hein</surname>
							<given-names>L</given-names>
						</name>
					</person-group>
					<article-title>CheckList for evaluation of radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII</article-title>
					<source>Insights Imaging</source>
					<year>2023</year>
					<volume>14</volume>
					<issue>1</issue>
					<fpage>20</fpage>
					<lpage>20</lpage>
					<pub-id pub-id-type="doi">10.1186/s13244-023-01415-8</pub-id>
				</element-citation>
			</ref>
			<ref id="B18">
				<label>18</label>
				<mixed-citation>18. Stevens LM, Mortazavi BJ, Deo RC, Curtis L, Kao DP. Recommendations for reporting machine learning analyses in clinical research. Circ Cardiovasc Qual Outcomes. 2020; 13(10): e006556. Doi: 10.1161/CIRCOUTCOMES.120.006556</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Stevens</surname>
							<given-names>LM</given-names>
						</name>
						<name>
							<surname>Mortazavi</surname>
							<given-names>BJ</given-names>
						</name>
						<name>
							<surname>Deo</surname>
							<given-names>RC</given-names>
						</name>
						<name>
							<surname>Curtis</surname>
							<given-names>L</given-names>
						</name>
						<name>
							<surname>Kao</surname>
							<given-names>DP</given-names>
						</name>
					</person-group>
					<article-title>Recommendations for reporting machine learning analyses in clinical research</article-title>
					<source>Circ Cardiovasc Qual Outcomes</source>
					<year>2020</year>
					<volume>13</volume>
					<issue>10</issue>
					<elocation-id>e006556</elocation-id>
					<pub-id pub-id-type="doi">10.1161/CIRCOUTCOMES.120.006556</pub-id>
				</element-citation>
			</ref>
			<ref id="B19">
				<label>19</label>
				<mixed-citation>19. Mongan J, Moy L, Kahn CE. Checklist for artificial intelligence in medical imaging (CLAIM): A guide for authors and reviewers. Radiol Artif Intell. 2020; 2(2): e200029. Doi: 10.1148/ryai.2020200029</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Mongan</surname>
							<given-names>J</given-names>
						</name>
						<name>
							<surname>Moy</surname>
							<given-names>L</given-names>
						</name>
						<name>
							<surname>Kahn</surname>
							<given-names>CE</given-names>
						</name>
					</person-group>
					<article-title>Checklist for artificial intelligence in medical imaging (CLAIM): A guide for authors and reviewers</article-title>
					<source>Radiol Artif Intell</source>
					<year>2020</year>
					<volume>2</volume>
					<issue>2</issue>
					<elocation-id>e200029</elocation-id>
					<pub-id pub-id-type="doi">10.1148/ryai.2020200029</pub-id>
				</element-citation>
			</ref>
			<ref id="B20">
				<label>20</label>
				<mixed-citation>20. Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, et al. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ. 2022; 377: e070904. Doi: 10.1136/bmj-2022-070904</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Vasey</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Nagendran</surname>
							<given-names>M</given-names>
						</name>
						<name>
							<surname>Campbell</surname>
							<given-names>B</given-names>
						</name>
						<name>
							<surname>Clifton</surname>
							<given-names>DA</given-names>
						</name>
						<name>
							<surname>Collins</surname>
							<given-names>GS</given-names>
						</name>
						<name>
							<surname>Denaxas</surname>
							<given-names>S</given-names>
						</name>
					</person-group>
					<article-title>Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI</article-title>
					<source>BMJ</source>
					<year>2022</year>
					<volume>377</volume>
					<elocation-id>e070904</elocation-id>
					<pub-id pub-id-type="doi">10.1136/bmj-2022-070904</pub-id>
				</element-citation>
			</ref>
			<ref id="B21">
				<label>21</label>
				<mixed-citation>21. Kwong JCC, McLoughlin LC, Haider M, Goldenberg MG, Erdman L, Rickard M, et al. Standardized reporting of machine learning applications in urology: The STREAM-URO framework. Eur Urol Focus. 2021; 7(4): 672-82. Doi: 10.1016/j.euf.2021.07.004.</mixed-citation>
				<element-citation publication-type="journal">
					<person-group person-group-type="author">
						<name>
							<surname>Kwong</surname>
							<given-names>JCC</given-names>
						</name>
						<name>
							<surname>McLoughlin</surname>
							<given-names>LC</given-names>
						</name>
						<name>
							<surname>Haider</surname>
							<given-names>M</given-names>
						</name>
						<name>
							<surname>Goldenberg</surname>
							<given-names>MG</given-names>
						</name>
						<name>
							<surname>Erdman</surname>
							<given-names>L</given-names>
						</name>
						<name>
							<surname>Rickard</surname>
							<given-names>M</given-names>
						</name>
					</person-group>
					<article-title>Standardized reporting of machine learning applications in urology: The STREAM-URO framework</article-title>
					<source>Eur Urol Focus</source>
					<year>2021</year>
					<volume>7</volume>
					<issue>4</issue>
					<fpage>672</fpage>
					<lpage>682</lpage>
					<pub-id pub-id-type="doi">10.1016/j.euf.2021.07.004</pub-id>
				</element-citation>
			</ref>
		</ref-list>
	</back>
	<sub-article article-type="translation" id="s1" xml:lang="es">
		<front-stub>
			<article-categories>
				<subj-group subj-group-type="heading">
					<subject>Editorial</subject>
				</subj-group>
			</article-categories>
			<title-group>
				<article-title>Inteligencia humana para autores, revisores y editores que utilicen inteligencia artificial</article-title>
			</title-group>
			<author-notes>
				<corresp id="c2">
					<label>Autor de correspondencia:</label>
					Mauricio Palacios Gomez. e-mail: <email>mao.palacios@correounivalle.edu.co</email>
				</corresp>
			</author-notes>
		</front-stub>
		<body>
			<p>Le llamamos inteligencia artificial a cualquier máquina que procese información con algún propósito, cumpliendo las reglas lógicas de la computación de <italic>Turing</italic> descritas hace más de 70 años <xref ref-type="bibr" rid="B1"><sup>1</sup></xref>. Estas máquinas funcionan con instrucciones llamadas algoritmos, que son una secuencia finita y bien definida de procesamiento de información que se implementa mediante autómatas (computadoras) o cualquier tecnología digital con el propósito de optimizar un proceso <xref ref-type="bibr" rid="B2"><sup>2</sup></xref>. Esto quiere decir que el fin de la inteligencia artificial es la optimización.</p>
			<p>La optimización es la capacidad de hacer o resolver alguna cosa de la manera más eficiente posible y, en el mejor de los casos, utilizando la menor cantidad de recursos. La optimización que se pretende obtener es programada y preestablecida por humanos; por lo tanto, estas tecnologías son herramientas creadas por humanos para propósitos humanos <xref ref-type="bibr" rid="B3"><sup>3</sup></xref>.</p>
			<p>La capacidad de optimización de la inteligencia artificial es asombrosa. Se estima que el uso de la inteligencia artificial facilitará alcanzar 134 de las 169 metas acordadas en la Agenda 2030 para el Desarrollo Sostenible <xref ref-type="bibr" rid="B4"><sup>4</sup></xref>. Sin embargo, en esta evaluación se proyectó que podría afectar negativamente el avance de 59 metas del mismo acuerdo, siendo la desigualdad social, económica, educativa, legal y de género el fenómeno más afectado por la inteligencia artificial.</p>
			<p>Esta proyección nos muestra que es necesario un contrapeso al desarrollo y la implementación de procesos mediados por inteligencia artificial, que mantenga la reflexión y cuestione la influencia de estas herramientas tecnológicas y, sobre todo, que esté basado en inteligencia humana. Una definición de inteligencia humana, en el entorno de la ciencia de datos y la inteligencia artificial, sería una colección de conocimientos tácitos contextuales sobre los valores humanos, la responsabilidad, la empatía, la intuición o el cuidado de otro ser vivo que no pueden describirse ni ejecutarse fácilmente mediante algoritmos <xref ref-type="bibr" rid="B5"><sup>5</sup></xref>.</p>
			<p>Mejorar las capacidades de atención de los sistemas de salud, tener diagnósticos con mayor exactitud, lograr la optimización de los tratamientos médicos y generar medidas de salud pública más eficientes y adecuadas son las promesas de los avances de la inteligencia artificial. La Organización Mundial de la Salud reconoce esas expectativas, pero advierte la necesidad de garantizar la transparencia, la explicación y la comprensión de cada aplicación basada en inteligencia artificial implementada en la salud, con evaluación permanente, que asegure la equidad y la inclusión, y que sea sostenible <xref ref-type="bibr" rid="B6"><sup>6</sup></xref>.</p>
			<p>Para las revistas científicas del área de la salud, la inteligencia artificial ya hace parte de las investigaciones que sustentan los manuscritos sometidos al proceso editorial; y, afortunadamente, contamos con guías para que los autores presenten sus manuscritos de forma completa. Estas permiten que los pares evaluadores y los editores decidan mejor sobre su publicación. Hasta ahora, la página web de <italic>Equator Network</italic> ha publicado doce pautas para los manuscritos de investigaciones basadas en inteligencia artificial; y en todas ellas está presente la preocupación por la transparencia acerca de la población de la cual se adquirieron los datos, el diseño y el desarrollo del algoritmo, el entrenamiento del modelo y la validez externa de los procesos optimizados (<xref ref-type="table" rid="t2">Tabla 1</xref>).</p>
			<p>
				<table-wrap id="t2">
					<label>Tabla 1</label>
					<caption>
						<title>Pautas para los manuscritos de investigaciones basados en IA publicados en <italic>Equator Network</italic></title>
					</caption>
					<table>
						<colgroup>
							<col/>
							<col/>
							<col/>
						</colgroup>
						<tbody>
							<tr>
								<td align="center">Guía</td>
								<td align="center">Nombre</td>
								<td align="center">Año</td>
							</tr>
							<tr>
								<td align="left">PRIME</td>
								<td align="left">Aprendizaje automático relacionado con las evaluaciones de imágenes cardiovasculares</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B10"><sup>10</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">MI-CLAIM</td>
								<td align="left">Modelos clínicos de inteligencia artificial</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B11"><sup>11</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left"> </td>
								<td align="left">La inteligencia artificial en la investigación odontológica</td>
								<td align="left">2021 <xref ref-type="bibr" rid="B12"><sup>12</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">SPIRIT-AI</td>
								<td align="left">Directrices sobre protocolos de ensayos clínicos para intervenciones con inteligencia artificial</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B13"><sup>13</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">CONSORT-AI</td>
								<td align="left">Directrices para la elaboración de informes de ensayos clínicos sobre intervenciones con inteligencia artificial</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B14"><sup>14</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">MINIMAR</td>
								<td align="left">Normas de información para la inteligencia artificial en la atención sanitaria</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B15"><sup>15</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">CAIR</td>
								<td align="left">Directriz de investigación clínica sobre inteligencia artificial</td>
								<td align="left">2021 <xref ref-type="bibr" rid="B16"><sup>16</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">CLEAR</td>
								<td align="left">Evaluación de la investigación radiómica</td>
								<td align="left">2023 <xref ref-type="bibr" rid="B17"><sup>17</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left"> </td>
								<td align="left">Informes de análisis de aprendizaje automático en investigación clínica</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B18"><sup>18</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">CLAIM</td>
								<td align="left">Lista de comprobación para la inteligencia artificial en el tratamiento de imágenes médicas</td>
								<td align="left">2020 <xref ref-type="bibr" rid="B19"><sup>19</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">DECIDE-AI</td>
								<td align="left">Guía para la evaluación clínica inicial de sistemas de apoyo a la toma de decisiones basados en inteligencia artificial</td>
								<td align="left">2022 <xref ref-type="bibr" rid="B20"><sup>20</sup></xref>
								</td>
							</tr>
							<tr>
								<td align="left">STREAM-URO</td>
								<td align="left">Informes sobre aplicaciones de aprendizaje automático en urología</td>
								<td align="left">2021 <xref ref-type="bibr" rid="B21"><sup>21</sup></xref>
								</td>
							</tr>
						</tbody>
					</table>
				</table-wrap>
			</p>
			<p>Sin embargo, la escritura y el proceso editorial no cuentan con las mismas guías. Los autores, pares evaluadores y editores se sorprenden con los algoritmos que prometen eficiencia en su labor. Esa fascinación nos lleva al riesgo de una confianza absoluta en la inteligencia artificial, conocida como algoritmocracia: un gobierno en el que los humanos y las máquinas obedecen a los algoritmos <xref ref-type="bibr" rid="B2"><sup>2</sup></xref>.</p>
			<p>Tenemos señales de que los algoritmos no son ideales en la edición científica. Llevamos años cuestionando el uso de los algoritmos con los cuales los índices bibliométricos clasifican (¿o descalifican?) las revistas científicas; pero aceptamos que los entes supervisores de la investigación los consideran el patrón de oro para medir la productividad científica. Los autores acuden con frecuencia a herramientas de escritura de inteligencia artificial, como <italic>ChatGPT</italic>, <italic>Bard</italic> y <italic>Bing</italic>, con poca reflexión acerca de sus limitaciones y de que pueden generar errores fácticos y de razonamiento en la escritura científica <xref ref-type="bibr" rid="B7"><sup>7</sup></xref>. Los editores pueden aceptar erróneamente el porcentaje de similitud que emiten los algoritmos antiplagio como regla en la evaluación de originalidad de un manuscrito, reemplazando completamente el juicio de experto. Siempre que se acuda a la optimización mediante inteligencia artificial se debe recordar que la tecnología no cambia la sociedad: es la inteligencia humana la que define la creación de aplicaciones, su uso y cómo afectan a la sociedad. Lo contrario es aceptar las tesis del determinismo tecnológico, y aunque no nos va a conducir a un futuro apocalíptico como el que propone Skynet en la saga <italic>Terminator</italic>, sí afectará la igualdad, la verdad y la originalidad de la ciencia <xref ref-type="bibr" rid="B8"><sup>8</sup></xref>.</p>
			<p>La pauta editorial de la Revista Colombia Médica acepta el uso de la inteligencia artificial en las investigaciones; la adhesión de los autores a las guías de publicación de investigaciones basadas en inteligencia artificial disponibles en la página web de <italic>Equator Network</italic> será norma para la revista.</p>
			<p>Adicionalmente, Colombia Médica, como miembro del ICMJE (Comité Internacional de Editores de Revistas Médicas) y la WAME (Asociación Mundial de Editores Médicos), acoge sus recomendaciones acerca de la definición de autoría y el uso de programas de inteligencia artificial para la elaboración y revisión de manuscritos sometidos a la revista <xref ref-type="bibr" rid="B9"><sup>9</sup></xref>. Estas recomendaciones, que son explicadas en un artículo reproducido de la WAME, son: </p>
			<p>
				<list list-type="bullet">
					<list-item>
						<p>No se aceptan autores no humanos.</p>
					</list-item>
					<list-item>
						<p>Los autores deben ser transparentes cuando utilizan <italic>chatbots</italic> y deben proporcionar información sobre cómo se utilizaron.</p>
					</list-item>
					<list-item>
						<p>Los autores son responsables de la información producida con un <italic>chatbot</italic> en su artículo (incluida la exactitud y la ausencia de plagio) y de la atribución adecuada de todas las fuentes.</p>
					</list-item>
					<list-item>
						<p>Los revisores y editores deben advertir a los autores si utilizaron <italic>chatbots</italic> en la evaluación del manuscrito y la generación de las revisiones y la correspondencia. También, deben explicar cómo los utilizaron.</p>
					</list-item>
					<list-item>
						<p>Los editores necesitan herramientas adecuadas que les ayuden a detectar contenido generado o alterado por la Inteligencia Artificial por el bien de la ciencia y del público, y para ayudar a garantizar la integridad de la información sanitaria y reducir el riesgo de resultados adversos para la salud.</p>
					</list-item>
				</list>
			</p>
			<p>Colofón: si la inteligencia artificial optimiza nuestro trabajo, ¿por qué tenemos menos tiempo libre?</p>
		</body>
	</sub-article>
</article>