<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">communicology</journal-id><journal-title-group><journal-title xml:lang="ru">Коммуникология</journal-title><trans-title-group xml:lang="en"><trans-title>Communicology</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2311-3065</issn><issn pub-type="epub">2311-3332</issn><publisher><publisher-name>МАК</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.21453/2311-3065-2024-12-3-43-60</article-id><article-id custom-type="elpub" pub-id-type="custom">communicology-452</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>МЕДИАКОММУНИКАЦИИ И ЖУРНАЛИСТИКА (ФИЛОЛОГИЧЕСКИЕ НАУКИ)</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>MEDIACOMMUNICATIONS AND JOURNALISM (PHILOLOGICAL SCIENCES)</subject></subj-group></article-categories><title-group><article-title>Искусственный интеллект и масс-медиа: негативные аспекты алгоритмов персонализации контента</article-title><trans-title-group xml:lang="en"><trans-title>Artificial Intelligence and mass media: negative aspects of content personalization algorithms</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0009-0005-2033-3149</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Тихонюк</surname><given-names>А. А.</given-names></name><name name-style="western" xml:lang="en"><surname>Tikhoniuk</surname><given-names>A. 
A.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Тихонюк Анастасия Александровна – шеф-редактор Дирекции развития цифровой среды</p><p>125040, г. Москва, ул. 5-я Ямского Поля, 19-21/1</p></bio><bio xml:lang="en"><p>Tikhoniuk Anastasiya Aleksandrovna – chief editor at the digital department</p><p>125040, Moscow, 5th Yamskogo Polya str., 19-21/1</p></bio><email xlink:type="simple">savanastasy@yandex.ru</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Всероссийская государственная телевизионная и радиовещательная компания (ВГТРК)<country>Россия</country></aff><aff xml:lang="en">All-Russia State Television and Radio Broadcasting Company (VGTRK)<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2024</year></pub-date><pub-date pub-type="epub"><day>05</day><month>10</month><year>2024</year></pub-date><volume>12</volume><issue>3</issue><fpage>43</fpage><lpage>60</lpage><permissions><copyright-statement>Copyright &#x00A9; Тихонюк А.А., 2024</copyright-statement><copyright-year>2024</copyright-year><copyright-holder xml:lang="ru">Тихонюк А.А.</copyright-holder><copyright-holder xml:lang="en">Tikhoniuk A.A.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://www.communicology.ru/jour/article/view/452">https://www.communicology.ru/jour/article/view/452</self-uri><abstract><p>Развитие технологий искусственного интеллекта и алгоритмов машинного обучения оказывает все большее влияние на сферы жизни общества, постепенно находя свое место не только в социальных медиа, но и в журналистике (Newman). 
Их активно внедряют в различные области масс-медиа, что позволяет автоматизировать ряд процессов медиакомпаний, оптимизируя работу журналистов, редакторов и медиаменеджеров. Данная тема представляет собой актуальную проблему в современном информационном обществе (Túñez-López et al.). Искусственный интеллект и процесс его обучения стали неотъемлемой частью процессов создания, анализа и распространения контента, привнося новые возможности, но вместе с тем и серьезные вызовы. Например, алгоритмы персонализации позволяют адаптировать информацию к индивидуальным интересам и предпочтениям каждого пользователя, повышая его вовлеченность и удовлетворенность контентом. Таким образом, социальные сети и многие другие интернет-платформы персонализированы для каждого пользователя на основе его демографического профиля и личных данных. В данной статье представлен обзор текущих научных данных о потенциальных рисках использования алгоритмов персонализации контента в масс-медиа. Результаты и выводы статьи могут помочь глубже понять природу этих рисков и сопряженные с ними вызовы для сферы массовой коммуникации.</p></abstract><trans-abstract xml:lang="en"><p>The development of artificial intelligence (AI) technologies and machine learning algorithms is increasingly influencing various aspects of social life, gradually finding its place not only in social media but also in journalism (Newman). They are actively being integrated into various fields of mass media, enabling the automation of several processes within media companies, thereby optimizing the work of journalists, editors, and media managers. This topic represents a pertinent issue in the modern information society (Túñez-López et al.). AI and its machine learning capabilities have become integral parts of the processes of content creation, analysis, and distribution, bringing new opportunities along with significant challenges. 
For instance, personalization algorithms allow for the adaptation of information to the individual interests and preferences of each user, increasing their engagement and satisfaction with the content. Thus, social networks and many other internet platforms are personalized for each user based on their demographic profiles and personal data. This article provides an overview of current scientific data on the potential risks associated with the use of content personalization algorithms in mass media. The results and conclusions of the article will help to better understand the nature of these risks and the associated challenges for the field of mass communication.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>искусственный интеллект</kwd><kwd>ИИ</kwd><kwd>персонализация контента</kwd><kwd>масс-медиа</kwd><kwd>СМИ</kwd><kwd>социальные медиа</kwd><kwd>алгоритмическая персонализация</kwd></kwd-group><kwd-group xml:lang="en"><kwd>artificial intelligence</kwd><kwd>AI</kwd><kwd>content personalization</kwd><kwd>mass media</kwd><kwd>media</kwd><kwd>social media</kwd><kwd>algorithmic personalization</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Володенков С.В. (2021). Интернет-коммуникации в глобальном пространстве современного политического управления: навстречу цифровому обществу. М.: Проспект.</mixed-citation><mixed-citation xml:lang="en">Amoore L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Грушевская В.Ю. (2022). Модель фильтрации информации в социальных медиа // Журнал исследования социальной политики. №3. С. 393-406.</mixed-citation><mixed-citation xml:lang="en">Aridor G., Goncalves D., Sikdar S. (2020). 
Deconstructing the Filter Bubble: User Decision-Making and Recommender Systems. In: RecSys’20: Fourteenth ACM Conference on Recommender Systems, Brazil, September 22-26. P. 82-91.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Давыдов С.Г., Замков А.В., Крашенинникова М.А., Лукина М.М. (2023). Использование технологий искусственного интеллекта в россий ских медиа и журналистике // Вестн. Моск. ун-та. Сер. 10: Журналистика. № 5. С. 3-21.</mixed-citation><mixed-citation xml:lang="en">Bakshy E., Messing S., Adamic L. (2015). Exposure to Ideologically Diverse News and Opinion on Facebook. Science. Vol. 348. No. 6239. P. 1130-1132.</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Ефанов А.А., Юдина Е.Н. (2021). Медиаэффекты в современном неоинформационном обществе. Коммуникология. Том 9. № 4. С. 136-147.</mixed-citation><mixed-citation xml:lang="en">Barnidge M. (2017). Exposure to Political Disagreement in Social Media Versus Face-to-Face and Anonymous Online Settings. Political Communication. Vol. 34. No. 2: P. 302-321.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Карпова А.Ю. (2014). Информационная аномия: выбор на грани фола // Власть. № 1. С. 41-45.</mixed-citation><mixed-citation xml:lang="en">Bastian M., Makhortykh M., Dobber T. (2019). News personalization for peace: how algorithmic recommendations can impact conflict coverage. International Journal of Conflict Management. Vol. 30. No. 3. P. 309-328.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Кириллина Н.В. (2022). Фрагментация аудитории медиа: от глобальной деревни к глобальному театру. Том 10. № 2. С. 170-179. 
DOI: 10.21453/2311-3065-2022-10-2-170-179.</mixed-citation><mixed-citation xml:lang="en">Bigman Y.E., Wilson D., Arnestad M. N., Waytz A., Gray K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General. No. 152 (1). P. 4-27.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Кириллина Н.В. (2021). О роли пользователя и фрагментации сети // Коммуникология. Том 9. № 2. С. 41-49. DOI: 10.21453/2311-3065-2021-9-2-41-49.</mixed-citation><mixed-citation xml:lang="en">Bozdag E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology. Vol. 15. No. 3. P. 209-227.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Мартыненко Т.С., Добринская Д.Е. (2021). Социальное неравенство в эпоху искусственного интеллекта: от цифрового к алгоритмическому разрыву // Мониторинг. № 1. С. 171-192.</mixed-citation><mixed-citation xml:lang="en">Brady W.J., Jackson J.C., Lindström B., Crockett M.J. (2023). Algorithm-mediated social learning in online social networks. Trends in Cognitive Sciences. Vol. 27. No. 10. P. 947-960.</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Суходолов А.П., Бычкова А.М., Ованесян С.С. (2019). Журналистика с искусственным интеллектом // Вопросы теории и практики журналистики. № 4. С. 647-667.</mixed-citation><mixed-citation xml:lang="en">Chaney A.J., Stewart B.M., Engelhardt B.E. (2017). How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In: Proceedings of the 12th ACM Conference on Recommender Systems. Vancouver. P. 224-232.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Толокнев К.А. (2022). Невидимый политрук: как алгоритмы персонализации формируют общественное мнение // Полития. № 4 (107). С. 63-82.</mixed-citation><mixed-citation xml:lang="en">Chen Y.-S., Zaman T. (2024). Shaping opinions in social networks with shadow banning. PLoS ONE. Vol. 19. No. 3. P. 1-30.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Шарков Ф.И., Силкин В.В. (2021). Генезис социологии медиапространства // Вестник Российского университета дружбы народов. Серия: Социология. Т. 21. № 3. С. 557-566. DOI: 10.22363/2313-2272-2021-21-3-557-566.</mixed-citation><mixed-citation xml:lang="en">Davydov S.G., Zamkov A.V., Krasheninnikova M.A., Lukina M.M. (2023). Use of artificial intelligence technologies in Russian media and journalism. Vestn. of Moscow University. Series 10: Journalism. No. 5. P. 3-21 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Amoore L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.</mixed-citation><mixed-citation xml:lang="en">Delmonaco D., Mayworm S., Thach H., Guberman J. (2024). What are you doing, TikTok?: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning. In: Proc. ACM on Human-Computer Interaction. Vol. 8. Article 154 (April 2024). DOI: 10.1145/3637431.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Aridor G., Goncalves D., Sikdar S. (2020). Deconstructing the Filter Bubble: User Decision-Making and Recommender Systems. In: RecSys'20: Fourteenth ACM Conference on Recommender Systems, Brazil, September 22-26. P. 82-91.</mixed-citation><mixed-citation xml:lang="en">Efanov A.A., Yudina E.N. (2021). Media effects in a modern neo-information society. 
Communicology. Vol. 9. No. 4. P. 136-147 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Bakshy E., Messing S., Adamic L. (2015). Exposure to Ideologically Diverse News and Opinion on Facebook. Science. Vol. 348. No. 6239. P. 1130-1132.</mixed-citation><mixed-citation xml:lang="en">Eg R., Tønnesen Ö., Tennfjord M. (2023). A scoping review of personalized user experiences on social media: The interplay between algorithms and human factors. Computers in Human Behavior Reports. Vol. 9. No. 3. P. 100253.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Barnidge M. (2017). Exposure to Political Disagreement in Social Media Versus Face-to-Face and Anonymous Online Settings. Political Communication. Vol. 34. No. 2: P. 302-321.</mixed-citation><mixed-citation xml:lang="en">Flaxman S., Goel S., Rao J.M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly. Vol. 80. No. 1. P. 298-320.</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Bastian M., Makhortykh M., Dobber T. (2019). News personalization for peace: how algorithmic recommendations can impact conflict coverage. International Journal of Conflict Management. Vol. 30. No. 3. P. 309-328.</mixed-citation><mixed-citation xml:lang="en">Gao Y., Liu H. (2022). Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. Journal of Research in Interactive Marketing. Vol. 17. No. 1. P. 1-18.</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Bigman Y.E., Wilson D., Arnestad M. N., Waytz A., Gray K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. 
Journal of Experimental Psychology: General. No. 152 (1). P. 4-27.</mixed-citation><mixed-citation xml:lang="en">Gentsch Pr. (2019). AI in Marketing, Sales and Service: How Marketers without a Data Science Degree can use AI, Big Data and Bots.</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">Bozdag E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology. Vol.15. No. 3. P. 209-227.</mixed-citation><mixed-citation xml:lang="en">Geschke D., Lorenz J., Holtz P. (2019). The Triple-Filter Bubble: Using Agent-Based Modeling to Test a Meta-Theoretical Framework for the Emergence of Filter Bubbles and Echo Chambers. British Journal of Social Psychology. Vol. 58. No. 1. P. 129-149.</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Brady W.J., Jackson J.C., Lindström B., Crockett M.J. (2023). Algorithm-mediated social learning in online social networks. Trends in Cognitive Sciences. Vol. 27. No. 10. P. 947-960.</mixed-citation><mixed-citation xml:lang="en">Gillespie T., Boczkowski P. J., Foot K.A. (eds.) (2014). The Relevance of Algorithms. Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">Chaney A.J., Stewart B.M., Engelhardt B.E. (2017). How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In: Proceedings of the 12th ACM Conference on Recommender Systems. Vancouver. P. 224-232.</mixed-citation><mixed-citation xml:lang="en">Gran A.B., Booth P., Bucher T. (2020). To be or not to be algorithm aware: a question of a new digital divide? Information, Communication &amp; Society. Vol. 24. No. 03. P. 
1-18.</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Chen Y.-S., Zaman T. (2024). Shaping opinions in social networks with shadow banning. PLoS ONE. Vol. 19. No. 3. P. 1-30.</mixed-citation><mixed-citation xml:lang="en">Grushevskaya V.Yu. (2022). Model of information filtration in social media. Journal of Social Policy Research. No. 3. P. 393-406 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">Delmonaco D., Mayworm S., Thach H., Guberman J. (2024). What are you doing, TikTok?: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning. In: Proc. ACM on Human-Computer Interaction. Vol. 8. Article 154 (April 2024). DOI: 10.1145/3637431.</mixed-citation><mixed-citation xml:lang="en">Hagendorff T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds &amp; Machines. Vol. 30. No. 03. P. 99-120.</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Eg R., Tønnesen Ö., Tennfjord M. (2023). A scoping review of personalized user experiences on social media: The interplay between algorithms and human factors. Computers in Human Behavior Reports. Vol. 9. No. 3. P. 100253.</mixed-citation><mixed-citation xml:lang="en">Hargittai E., Micheli M. (2019). Internet Skills and Why They Matter. In: Graham M., Dutton W. H. (eds.) Society and the Internet: How Networks of Information and Communication Are Changing Our Lives. Oxford: Oxford University Press. P. 109-124.</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">Flaxman S., Goel S., Rao J.M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly. Vol. 80. No. 1. P. 
298-320.</mixed-citation><mixed-citation xml:lang="en">Hassan R. (2020) The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life. London: University of Westminster Press.</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">Gao Y., Liu H. (2022). Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. Journal of Research in Interactive Marketing. Vol. 17. No. 1. P. 1-18.</mixed-citation><mixed-citation xml:lang="en">Helberger N. (2016). Policy implications from algorithmic profiling and the changing relationship between newsreaders and the media. Javnost – The Public. Vol. 23. No. 2. P. 188-203.</mixed-citation></citation-alternatives></ref><ref id="cit26"><label>26</label><citation-alternatives><mixed-citation xml:lang="ru">Gentsch Pr. (2019). AI in Marketing, Sales and Service: How Marketers without a Data Science Degree can use AI, Big Data and Bots.</mixed-citation><mixed-citation xml:lang="en">Helberger N. (2019). On the Democratic Role of News Recommenders. Digital Journalism. Vol. 7. No. 4. P. 1-20.</mixed-citation></citation-alternatives></ref><ref id="cit27"><label>27</label><citation-alternatives><mixed-citation xml:lang="ru">Geschke D., Lorenz J., Holtz P. (2019). The Triple-Filter Bubble: Using Agent-Based Modeling to Test a Meta-Theoretical Framework for the Emergence of Filter Bubbles and Echo Chambers. British Journal of Social Psychology. Vol. 58. No. 1. P. 129-149.</mixed-citation><mixed-citation xml:lang="en">Hosanagar K., Fleder D., Lee D., Buja A. (2014). Will the Global Village Fracture Into Tribes? Recommender Systems and Their Effects on Consumer Fragmentation. Management Science. Vol. 60. No. 4, P. 805-823.</mixed-citation></citation-alternatives></ref><ref id="cit28"><label>28</label><citation-alternatives><mixed-citation xml:lang="ru">Gillespie T., Boczkowski P. J., Foot K.A. (eds.) 
(2014). The Relevance of Algorithms. Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.</mixed-citation><mixed-citation xml:lang="en">Jain S., Sundstrom M. (2021). Toward a conceptualization of personalized services in apparel e-commerce fulfillment. Research Journal of Textile and Apparel. Vol. 25. No. 4. P. 414-430.</mixed-citation></citation-alternatives></ref><ref id="cit29"><label>29</label><citation-alternatives><mixed-citation xml:lang="ru">Gran A.B., Booth P., Bucher T. (2020). To be or not to be algorithm aware: a question of a new digital divide? Information, Communication &amp; Society. Vol. 24. No. 03. P. 1-18.</mixed-citation><mixed-citation xml:lang="en">Joris G., Grove F.D., Van Damme K., De Marez L. (2021). Appreciating News Algorithms: Examining Audiences’ Perceptions to Different News Selection Mechanisms. Digital Journalism. Vol. 9. No. 5, P. 589-618.</mixed-citation></citation-alternatives></ref><ref id="cit30"><label>30</label><citation-alternatives><mixed-citation xml:lang="ru">Hagendorff T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds &amp; Machines. Vol. 30. No. 03. P. 99-120.</mixed-citation><mixed-citation xml:lang="en">Just N., Latzer M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture &amp; Society. Vol. 39 No. 2, P. 238-258.</mixed-citation></citation-alternatives></ref><ref id="cit31"><label>31</label><citation-alternatives><mixed-citation xml:lang="ru">Hargittai E., Micheli M. (2019). Internet Skills and Why They Matter. In: Graham M., Dutton W. H. (eds.) Society and the Internet: How Networks of Information and Communication Are Changing Our Lives. Oxford: Oxford University Press. P. 109-124.</mixed-citation><mixed-citation xml:lang="en">Kant T. (2020). Making it Personal: Algorithmic Personalization, Identity, and Everyday Life. 
Oxford Academic.</mixed-citation></citation-alternatives></ref><ref id="cit32"><label>32</label><citation-alternatives><mixed-citation xml:lang="ru">Hassan R. (2020) The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life. London: University of Westminster Press.</mixed-citation><mixed-citation xml:lang="en">Karduni A. (2019). Human-Misinformation interaction: Understanding the interdisciplinary approach needed to computationally combat false information. Vol. 1. No. 1. P. 1-21.</mixed-citation></citation-alternatives></ref><ref id="cit33"><label>33</label><citation-alternatives><mixed-citation xml:lang="ru">Helberger N. (2016). Policy implications from algorithmic profiling and the changing relationship between newsreaders and the media. Javnost – The Public. Vol. 23. No. 2. P. 188-203.</mixed-citation><mixed-citation xml:lang="en">Karpova A.Yu. (2014). Information Anomie: Choosing on the Edge of a Foul. Vlast. No. 1. P. 41-45 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit34"><label>34</label><citation-alternatives><mixed-citation xml:lang="ru">Helberger N. (2019). On the Democratic Role of News Recommenders. Digital Journalism. Vol. 7. No. 4. P. 1-20.</mixed-citation><mixed-citation xml:lang="en">Kim T., Barasz K., John L. K. (2018). Why Am I Seeing This Ad? The Effect of Ad Transparency on Ad Effectiveness. Journal of Consumer Research. Vol. 45. No. 5. P. 906-932.</mixed-citation></citation-alternatives></ref><ref id="cit35"><label>35</label><citation-alternatives><mixed-citation xml:lang="ru">Hosanagar K., Fleder D., Lee D., Buja A. (2014). Will the Global Village Fracture Into Tribes? Recommender Systems and Their Effects on Consumer Fragmentation. Management Science. Vol. 60. No. 4, P. 805-823.</mixed-citation><mixed-citation xml:lang="en">Kirillina N.V. (2020). Sources of uncertainty and application of iterative approach in interactive communication campaigns (research and practice). Communicology. Vol. 8. 
No. 4. P. 172-179 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit36"><label>36</label><citation-alternatives><mixed-citation xml:lang="ru">Jain S., Sundstrom M. (2021). Toward a conceptualization of personalized services in apparel e-commerce fulfillment. Research Journal of Textile and Apparel. Vol. 25. No. 4. P. 414-430.</mixed-citation><mixed-citation xml:lang="en">Kirillina N.V. (2021). On user roles and fragmentation of the global network. Communicology. Vol. 9. No. 2. P. 41-49. DOI: 10.21453/2311-3065-2021-9-2-41-49 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit37"><label>37</label><citation-alternatives><mixed-citation xml:lang="ru">Joris G., Grove F.D., Van Damme K., De Marez L. (2021). Appreciating News Algorithms: Examining Audiences’ Perceptions to Different News Selection Mechanisms. Digital Journalism. Vol. 9. No. 5, P. 589-618.</mixed-citation><mixed-citation xml:lang="en">Kitchin R. (2017). Thinking critically about and researching algorithms. Information, Communication &amp; Society. Vol. 20. No. 1. P. 14-29.</mixed-citation></citation-alternatives></ref><ref id="cit38"><label>38</label><citation-alternatives><mixed-citation xml:lang="ru">Just N., Latzer M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture &amp; Society. Vol. 39 No. 2, P. 238-258.</mixed-citation><mixed-citation xml:lang="en">Lee M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data &amp; Society. Vol. 5. No. 1. P. 1-16.</mixed-citation></citation-alternatives></ref><ref id="cit39"><label>39</label><citation-alternatives><mixed-citation xml:lang="ru">Kant T. (2020). Making it Personal: Algorithmic Personalization, Identity, and Everyday Life. Oxford Academic.</mixed-citation><mixed-citation xml:lang="en">Martynenko T.S., Dobrinskaya D.E. (2021). 
Social Inequality in the Era of Artificial Intelligence: From Digital to Algorithmic Gap. Monitoring. No. 1. P. 171-192 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit40"><label>40</label><citation-alternatives><mixed-citation xml:lang="ru">Karduni A. (2019). Human-Misinformation interaction: Understanding the interdisciplinary approach needed to computationally combat false information. Vol. 1. No. 1. P. 1-21.</mixed-citation><mixed-citation xml:lang="en">Masrour F., Wilson T., Yan H., Tan P. N., Esfahanian A. (2020). Bursting the Filter Bubble: Fairness-aware Network Link Prediction. In: Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 01. P. 841-848.</mixed-citation></citation-alternatives></ref><ref id="cit41"><label>41</label><citation-alternatives><mixed-citation xml:lang="ru">Kim T., Barasz K., John L. K. (2018). Why Am I Seeing This Ad? The Effect of Ad Transparency on Ad Effectiveness. Journal of Consumer Research. Vol. 45. No. 5. P. 906-932.</mixed-citation><mixed-citation xml:lang="en">Möller J., Trilling D., Helberger N., van Es B. (2018). Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication &amp; Society. Vol. 21. No. 7. P. 959–977.</mixed-citation></citation-alternatives></ref><ref id="cit42"><label>42</label><citation-alternatives><mixed-citation xml:lang="ru">Kitchin R. (2017). Thinking critically about and researching algorithms. Information, Communication &amp; Society. Vol. 20. No. 1. P. 14-29.</mixed-citation><mixed-citation xml:lang="en">Morik M., Singh A., Hong J., Joachims T. (2020). Controlling Fairness and Bias in Dynamic Learning-to-Rank. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’20), July 25–30, 2020, Virtual Event, China. 
ACM, New York, NY, USA.</mixed-citation></citation-alternatives></ref><ref id="cit43"><label>43</label><citation-alternatives><mixed-citation xml:lang="ru">Lee M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data &amp; Society. Vol. 5. No. 1. P. 1-16.</mixed-citation><mixed-citation xml:lang="en">Newman N. (2020). Journalism, media and technology: trends and predictions for 2020. London: Reuters Institute for the Study of Journalism &amp; Oxford University.</mixed-citation></citation-alternatives></ref><ref id="cit44"><label>44</label><citation-alternatives><mixed-citation xml:lang="ru">Masrour F., Wilson T., Yan H., Tan P. N., Esfahanian A. (2020). Bursting the Filter Bubble: Fairness-aware Network Link Prediction. In: Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 01. P. 841-848.</mixed-citation><mixed-citation xml:lang="en">Nguyen T.T., Hui P.M., Harper F.M, Terveen L., Konstan J.A. (2014). Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity. In: Proceedings of the 23rd International Conference on World Wide Web. Association for Computing Machinery. New York. P. 677-686.</mixed-citation></citation-alternatives></ref><ref id="cit45"><label>45</label><citation-alternatives><mixed-citation xml:lang="ru">Möller J., Trilling D., Helberger N., van Es B. (2018). Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication &amp; Society. Vol. 21. No. 7. P. 959–977.</mixed-citation><mixed-citation xml:lang="en">Pariser E. (2011). The filter bubble: What the Internet is hiding from you. Penguin.</mixed-citation></citation-alternatives></ref><ref id="cit46"><label>46</label><citation-alternatives><mixed-citation xml:lang="ru">Morik M., Singh A., Hong J., Joachims T. (2020). 
Controlling Fairness and Bias in Dynamic Learning-to-Rank. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’20), July 25–30, 2020, Virtual Event, China. ACM, New York, NY, USA.</mixed-citation><mixed-citation xml:lang="en">Pu P., Chen L., Hu R. (2012). Evaluating recommender systems from the user’s perspective: Survey of the state of the art. User Modeling and User-Adapted Interaction. Vol. 22. No. 10. P. 317-355.</mixed-citation></citation-alternatives></ref><ref id="cit47"><label>47</label><citation-alternatives><mixed-citation xml:lang="ru">Newman N. (2020). Journalism, media and technology: trends and predictions for 2020. London: Reuters Institute for the Study of Journalism &amp; Oxford University.</mixed-citation><mixed-citation xml:lang="en">Ragnedda M. (2020). Enhancing Digital Equity. Connecting the Digital Underclass. Cham: Palgrave Macmillan.</mixed-citation></citation-alternatives></ref><ref id="cit48"><label>48</label><citation-alternatives><mixed-citation xml:lang="ru">Nguyen T.T., Hui P.M., Harper F.M, Terveen L., Konstan J.A. (2014). Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity. In: Proceedings of the 23rd International Conference on World Wide Web. Association for Computing Machinery. New York. P. 677-686.</mixed-citation><mixed-citation xml:lang="en">Ricci F., Rokach L., Shapira B. (2015). Recommender Systems: Introduction and Challenges. In: Recommender Systems Handbook. P. 1-34.</mixed-citation></citation-alternatives></ref><ref id="cit49"><label>49</label><citation-alternatives><mixed-citation xml:lang="ru">Pariser E. (2011). The filter bubble: What the Internet is hiding from you. Penguin.</mixed-citation><mixed-citation xml:lang="en">Sharkov F.I., Silkin V.V. (2021). Genesis of the sociology of media space. RUDN Journal of Sociology. Vol. 21. No. 3. P. 557-566. 
DOI: 10.22363/2313-2272-2021-21-3-557-566 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit50"><label>50</label><citation-alternatives><mixed-citation xml:lang="ru">Pu P., Chen L., Hu R. (2012). Evaluating recommender systems from the user’s perspective: Survey of the state of the art. User Modeling and User-Adapted Interaction. Vol. 22. No. 10. P. 317-355.</mixed-citation><mixed-citation xml:lang="en">Soffer O. (2021). Algorithmic Personalization and the Two-Step Flow of Communication. Communication Theory. Vol. 31. No. 3. P. 297-315.</mixed-citation></citation-alternatives></ref><ref id="cit51"><label>51</label><citation-alternatives><mixed-citation xml:lang="ru">Ragnedda M. (2020). Enhancing Digital Equity. Connecting the Digital Underclass. Cham: Palgrave Macmillan.</mixed-citation><mixed-citation xml:lang="en">Sukhodolov A.P., Bychkova A.M., Ovanesyan S.S. (2019). Journalism with Artificial Intelligence. Issues of Theory and Practice of Journalism. No. 4. P. 647-667 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit52"><label>52</label><citation-alternatives><mixed-citation xml:lang="ru">Ricci F., Rokach L., Shapira B. (2015). Recommender Systems: Introduction and Challenges. In: Recommender Systems Handbook. P. 1-34.</mixed-citation><mixed-citation xml:lang="en">Sunstein C.R. (2001). Echo chambers. Princeton: Princeton University Press.</mixed-citation></citation-alternatives></ref><ref id="cit53"><label>53</label><citation-alternatives><mixed-citation xml:lang="ru">Soffer O. (2021). Algorithmic Personalization and the Two-Step Flow of Communication. Communication Theory. Vol. 31. No. 3. P. 297-315.</mixed-citation><mixed-citation xml:lang="en">Toloknev K.A. (2022). Invisible Political Commissar: How Personalization Algorithms Shape Public Opinion. Polity. No. 4 (107). P. 
63-82 (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit54"><label>54</label><citation-alternatives><mixed-citation xml:lang="ru">Sunstein C.R. (2001). Echo chambers. Princeton: Princeton University Press.</mixed-citation><mixed-citation xml:lang="en">Túñez-López J.M., Fieiras Ceide C., Vaz-Álvarez M. (2021). Impact of Artificial Intelligence on Journalism: transformations in the company, products, contents and professional profile. Communication &amp; Society. Vol. 34. No. 1. P. 177-193.</mixed-citation></citation-alternatives></ref><ref id="cit55"><label>55</label><citation-alternatives><mixed-citation xml:lang="ru">Túñez-López J.M., Fieiras Ceide C., Vaz-Álvarez M. (2021). Impact of Artificial Intelligence on Journalism: transformations in the company, products, contents and professional profile. Communication &amp; Society. Vol. 34. No. 1. P. 177-193.</mixed-citation><mixed-citation xml:lang="en">Túñez-López M., Toural-Bran C., Cacheiro-Requeijo S. (2018). Uso de bots y algoritmos para automatizar la redacción de noticias: percepción y actitudes de los periodistas en España. El profesional de la información. Vol. 27. No. 4. P. 750-758.</mixed-citation></citation-alternatives></ref><ref id="cit56"><label>56</label><citation-alternatives><mixed-citation xml:lang="ru">Túñez-López M., Toural-Bran C., Cacheiro-Requeijo S. (2018). Uso de bots y algoritmos para automatizar la redacción de noticias: percepción y actitudes de los periodistas en España. El profesional de la información. Vol. 27. No. 4. P. 750-758.</mixed-citation><mixed-citation xml:lang="en">Turner Lee N., Resnick P., Barton G. (2019). Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Inst.</mixed-citation></citation-alternatives></ref><ref id="cit57"><label>57</label><citation-alternatives><mixed-citation xml:lang="ru">Turner Lee N., Resnick P., Barton G. (2019). 
Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Inst.</mixed-citation><mixed-citation xml:lang="en">Van Dijck J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford University Press.</mixed-citation></citation-alternatives></ref><ref id="cit58"><label>58</label><citation-alternatives><mixed-citation xml:lang="ru">Van Dijck J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford University Press.</mixed-citation><mixed-citation xml:lang="en">Volodenkov S.V. (2021). Internet communications in the global space of modern political governance: towards a digital society. Moscow: Prospect (in Rus.).</mixed-citation></citation-alternatives></ref><ref id="cit59"><label>59</label><citation-alternatives><mixed-citation xml:lang="ru">Vosoughi S., Roy D., Aral S. (2018). The spread of true and false news online. Science. Vol. 359. Iss. 6380. P. 1146-1151.</mixed-citation><mixed-citation xml:lang="en">Vosoughi S., Roy D., Aral S. (2018). The spread of true and false news online. Science. Vol. 359. Iss. 6380. P. 1146-1151.</mixed-citation></citation-alternatives></ref><ref id="cit60"><label>60</label><citation-alternatives><mixed-citation xml:lang="ru">Yang X., Zhang L., Feng Z. (2023). Personalized Tourism Recommendations and the ETourism User Experience. Journal of Travel Research. Vol. 63. Iss. 5. DOI: 10.1177/00472875231187332.</mixed-citation><mixed-citation xml:lang="en">Yang X., Zhang L., Feng Z. (2023). Personalized Tourism Recommendations and the ETourism User Experience. Journal of Travel Research. Vol. 63. Iss. 5. DOI: 10.1177/00472875231187332.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The authors declare that there are no conflicts of interest present.</p></fn></fn-group></back></article>
