Research Article | Peer-Reviewed

Willingness of Users to Accept AI Content Creation

Received: 8 April 2025     Accepted: 8 May 2025     Published: 26 June 2025
Abstract

This study investigates the impact of perceived content quality and frequency of interaction with AI-generated materials on users' willingness to accept such automated content without additional human editing. Given the expanding role of artificial intelligence in digital communications, exploring user acceptance of AI-produced content is increasingly important. Utilizing a quantitative research method, data were collected from 1,118 internet users familiar with digital content via computer-assisted web interviewing (CAWI). Statistical techniques, specifically Spearman's correlation and ordinal logistic regression, were employed to pinpoint essential determinants of acceptance. Findings revealed that a higher perceived quality of AI-generated content significantly enhances user willingness to accept it without human review. Conversely, the analysis showed a slight negative correlation regarding interaction frequency, indicating that repeated exposure could heighten users' awareness of imperfections inherent in AI-generated materials, thereby potentially decreasing their trust and willingness to adopt it autonomously. These findings highlight the strategic importance of prioritising content quality over exposure frequency. Limitations regarding the representativeness of the sample and the moderate explanatory power of the statistical model indicate the need for future research to explore additional moderating factors, such as digital literacy, demographic characteristics, and general attitudes towards innovation.

Published in European Business & Management (Volume 11, Issue 2)
DOI 10.11648/j.ebm.20251102.12
Page(s) 40-47
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Artificial Intelligence, Automated Content Generation, Trust in AI, AI-generated Content, Human-AI Interaction, Technology Adoption

1. Introduction
Today, the digital environment serves as a key channel not only for informing users, but also for influencing their decisions and attitudes. The increasing demand for personalised content and the pressure to increase efficiency in the production of different types of text have led to a rise in the use of Artificial Intelligence (AI) in automated content creation. The article first presents a theoretical framework, followed by the methodology, results, discussion, and conclusions.
2. Theoretical Framework
Today, the use of AI for content generation is no longer a futuristic vision but an established practice in fields such as marketing, journalism, e-commerce, and education. Automating content creation brings several benefits, including a significant acceleration of production and the ability to generate content in virtually unlimited quantities while minimizing its cost. However, these benefits do not automatically guarantee user acceptance, because the quality of automatically created content is perceived differently. The question therefore remains: what is the relationship between the perceived quality of AI content and users' willingness to accept that content without human control and intervention?
The quality of AI-generated content can be assessed in a number of ways, with the key attributes tending to be relevance, comprehensibility, accuracy, and originality. Advances in algorithms, driven largely by language models capable of producing grammatically correct and contextually coherent texts, have significantly enhanced their capacity to create content comparable to human output. Nevertheless, concerns persist regarding the accuracy, depth, and authenticity of AI-generated material, which may affect user trust. Prior studies have indicated that users typically assess automated content by comparing it directly with content crafted by humans, and researchers underline that perceptions of AI content quality are strongly influenced by individual expectations and previous familiarity with AI technologies. Consequently, frequent exposure to positively evaluated AI-generated content can enhance users' acceptance of automated outputs without human intervention.
Users' willingness to accept content generated by AI without subsequent human editing is influenced not only by perceived quality but also by their level of regular interaction with such content. Drawing on social cognitive theory, it can be argued that people's attitudes and preferences are shaped by their personal experiences and interactions with emerging technologies. Increased exposure to AI-generated content may facilitate gradual acceptance and help reduce initial biases against the technology. Research supports the notion that repeated positive interactions with technological innovations encourage their integration into everyday usage; conversely, negative experiences, particularly encounters with substandard content, can drive users back towards human-produced content. Interaction frequency is therefore essential in shaping users' acceptance of AI-generated material without human oversight, a concept central to the hypothesis linking contact frequency with acceptance rates.
User trust emerges as another critical factor. Within technology acceptance frameworks, trust is commonly understood as users' confidence in a system's reliability and capability to meet their expectations. If users perceive AI-generated content as accurate, valuable, and reliable, they become more inclined to accept it independently of human verification; diminished trust, often resulting from a perceived lack of transparency or from accuracy concerns, can reinforce preferences for content verified or edited by humans. Examining how perceived quality affects trust, and how regular exposure to AI-generated content may strengthen or weaken that trust, is therefore of particular relevance.
To analyze the factors influencing acceptance of AI-generated content, this research employs the Technology Acceptance Model (TAM) as its primary analytical framework. TAM remains among the most widely adopted theoretical models for predicting user acceptance of new technologies, emphasizing two fundamental determinants: perceived usefulness and perceived ease of use. Its adoption here is justified mainly by its emphasis on perceived usefulness, which is closely linked to perceptions of AI-generated content quality. Users' readiness to accept automated content without human adjustment depends primarily on how beneficial, relevant, and valuable they perceive such content to be; high perceived usefulness significantly enhances acceptance without human oversight, establishing the theoretical basis for treating perceived content quality as a crucial determinant of acceptance.
Integrating trust theory further enriches TAM within this research context. Trust theory posits that users' willingness to adopt and rely on technological innovations depends heavily on their trust in those technologies' reliability and effectiveness; trust in AI specifically is defined by users' perceptions of the system's consistency, competence, and intent to meet expectations. Because users find it difficult to objectively assess the technical complexity of AI-generated content, their acceptance largely hinges on subjective trust evaluations. That trust evolves from the perceived reliability and quality of AI outputs and can decline through repeated exposure to substandard or inaccurate content. Combining trust theory with TAM thus provides deeper insight into the multifaceted nature of accepting AI-generated content: perceived usefulness positively influences acceptance, while trust operates as a moderating factor shaped by content accuracy and exposure frequency. Increased perceived quality strengthens user trust and enhances willingness to autonomously accept AI-generated content; conversely, frequent interactions that highlight AI's shortcomings can erode trust and consequently diminish acceptance.
Based on these theoretical findings, it can be assumed that the perceived quality of AI-generated content and the frequency of contact with it are significant factors influencing users' willingness to accept such content without human editing (Figure 1).
Figure 1. Conceptual model of factors influencing AI content acceptance.
The following research question and hypotheses are based on the above context:
Research Question (RQ):
To what extent does the quality rating of AI-generated content and the frequency of exposure to it influence consumers' willingness to accept AI-generated content without human editing?
Hypothesis:
H1: Higher ratings for the quality of AI-generated content are associated with a higher willingness to accept content without human correction.
H2: More frequent exposure to AI-generated content positively influences the acceptance of fully automated content.
3. Methodology
In order to test the stated hypotheses and answer the research question, a quantitative research strategy was chosen, combining a questionnaire survey with statistical data analysis. A quantitative design enables an objective assessment of the relationships between variables and of the degree of their influence on the phenomenon under study.
The research population consisted of internet users selected by random sampling. Respondents who indicated experience with using digital content were included. A total of 1,309 respondents took part in the research; after excluding incomplete questionnaires, 1,118 valid responses entered the final analysis, a sample large enough to support credible statistical calculations. To enhance the external validity and contextual understanding of the findings, the survey also collected demographic data, including gender (67.2% female, 31.7% male), age distribution (18-28 years: 14.8%; 29-44 years: 14.9%; 45-60 years: 27.4%; 61-79 years: 39.7%; older: 3.1%), and self-reported technical literacy regarding AI technology usage (Never: 37.4%; Rarely: 20.2%; Occasionally: 20.3%; Frequently: 17.4%; Always: 4.7%). Despite the random sampling procedure, the demographic profile may limit generalizability, as the sample skews towards older respondents and has an uneven gender distribution.
The variables were operationalized as follows. The dependent variable, willingness to accept AI content, captured respondents' willingness to accept AI-generated content without human editing and was measured on an ordinal scale with three categories: I am not willing; I am willing if the content is of good quality; I am always willing. The independent variables were the perceived quality of AI-generated content (AI quality), measured on a four-point scale (poor, average, good, excellent), and the frequency of respondents' exposure to AI-generated content (AI encounter frequency, based on the question "Have you encountered AI content?"), operationalized with three categories (never, occasionally, often). Respondents were also asked to specify the types of AI-generated content they had encountered most frequently, primarily identifying social media posts (78.2%), marketing campaigns (44.4%), chatbots (33.6%), and online media such as blogs or news websites (26.8%).
Data were collected through an online questionnaire (CAWI) built in Google Forms and distributed over a period of three months. Respondents were informed at the outset about the objectives of the research, the voluntary nature of participation, the anonymity of the data, and the possibility of discontinuing at any time. The validity of the questionnaire was ensured by pilot testing on a sample of 30 respondents, after which some questions were modified to eliminate possible ambiguities.
The data were analysed in the statistical software IBM SPSS, version 30, with supporting visualisations prepared in MS Excel. Descriptive statistics included medians, frequencies, and the percentage distribution of responses for each variable.
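To make the operationalization concrete, the following is a minimal sketch of how such ordinal survey responses could be coded for analysis in Python with pandas; the column names, response labels, and example rows are illustrative assumptions, not the study's actual instrument or data.

```python
# Illustrative coding of the three ordinal survey variables.
# Column names and labels are hypothetical, not the study's raw export.
import pandas as pd

df = pd.DataFrame({
    "ai_quality": ["good", "excellent", "poor", "average", "good"],
    "ai_encounter_frequency": ["often", "occasionally", "never",
                               "often", "occasionally"],
    "acceptance": ["if_good_quality", "always", "not_willing",
                   "if_good_quality", "always"],
})

# Map ordered categories to integer ranks, preserving their ordering
quality_order = {"poor": 1, "average": 2, "good": 3, "excellent": 4}
frequency_order = {"never": 1, "occasionally": 2, "often": 3}
acceptance_order = {"not_willing": 1, "if_good_quality": 2, "always": 3}

df["ai_quality_ord"] = df["ai_quality"].map(quality_order)
df["ai_encounter_ord"] = df["ai_encounter_frequency"].map(frequency_order)
df["acceptance_ord"] = df["acceptance"].map(acceptance_order)

print(df[["ai_quality_ord", "ai_encounter_ord", "acceptance_ord"]])
```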
Spearman's correlation analysis was used due to the ordinal nature of the variables under study. The calculation of Spearman's correlation coefficient (rs) was carried out according to the standard formula:
r_s = 1 - (6 × ∑d_i²) / (n³ - n)        (1)
where d_i is the difference between the ranks of the i-th pair of values and n is the number of observations.
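As an illustration, Spearman's coefficient can be computed both with a statistics library and directly from Eq. (1); the sketch below uses made-up ratings, not the study's data. Note that Eq. (1) is exact only in the absence of ties, while library implementations handle ties via average ranks.

```python
# Sketch: Spearman's rho via scipy and via Eq. (1). Data are illustrative.
import numpy as np
from scipy.stats import spearmanr, rankdata

quality = np.array([3, 4, 1, 2, 3, 4, 2, 3])     # hypothetical quality ratings
acceptance = np.array([2, 3, 1, 2, 2, 3, 1, 3])  # hypothetical acceptance levels

rho, p_value = spearmanr(quality, acceptance)    # tie-aware implementation
print(f"scipy: rho = {rho:.3f}, p = {p_value:.3f}")

# Direct application of Eq. (1), exact when there are no ties
d = rankdata(quality) - rankdata(acceptance)
n = len(quality)
rho_manual = 1 - (6 * np.sum(d**2)) / (n**3 - n)
print(f"Eq. (1): rho = {rho_manual:.3f}")
```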
The aim of the analysis was to test the relationships between the independent variables, AI quality and AI encounter frequency, and the dependent variable, willingness to accept AI content. Ordinal logistic regression was then applied to examine these relationships in detail. The model used a logit link function of the following form:
logit(P(Y ≤ j)) = α_j - (β₁X₁ + β₂X₂ + ... + β_nX_n)        (2)
where P(Y ≤ j) denotes the cumulative probability of belonging to a given or lower category of the dependent variable, α_j are the threshold parameters, and β₁ to β_n are the regression coefficients of the independent variables AI quality and AI encounter frequency. The model analysis also included a parallel lines test of the assumption that the regression coefficients are equal across categories of the dependent variable. Given the detected violation of the parallel lines assumption (p = 0.027) and the relatively low pseudo R² values (Nagelkerke R² = 0.084), additional caution was exercised in interpreting the regression results.
The research was conducted in accordance with the basic ethical standards of scientific research. The complete anonymity of the respondents was guaranteed and the data were used for scientific purposes only. Respondents could discontinue their participation at any time and were explicitly informed of this option at the beginning of the questionnaire.
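For readers who wish to reproduce the modelling approach, the following is a minimal sketch of fitting the cumulative-logit model of Eq. (2) with statsmodels; the simulated data and effect sizes are assumptions for illustration only, not the study's dataset.

```python
# Sketch: ordinal logistic regression (cumulative logit) with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "ai_quality": rng.integers(1, 5, n),    # 1 = poor ... 4 = excellent
    "ai_encounter": rng.integers(1, 4, n),  # 1 = never ... 3 = often
})
# Simulated ordinal outcome loosely tied to the predictors
latent = (0.6 * df["ai_quality"] - 0.2 * df["ai_encounter"]
          + rng.logistic(size=n))
df["acceptance"] = pd.cut(latent, bins=[-np.inf, 1.4, 2.2, np.inf],
                          labels=[1, 2, 3]).astype(int)

model = OrderedModel(df["acceptance"],
                     df[["ai_quality", "ai_encounter"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # threshold (alpha_j) and slope (beta) estimates
```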
4. Results
The following section contains the interpretation of the individual results and the verification of the research question and hypotheses. Data analysis was conducted on a sample of 1,309 respondents, and after eliminating incomplete responses, 1,118 valid responses were included in the statistical processing (Table 1).
Table 1. Case Processing Summary.

                               N      Marginal Percentage
Acceptance of AI content   1   707    63.2%
                           2   96     8.6%
                           3   206    18.4%
                           4   109    9.7%
Valid                          1118   100.0%
Missing                        191
Total                          1309

The majority of respondents (63.2%) said they only accept AI-generated content without editing if it is of high quality. Approximately 18.4% of respondents said they always accept automated content, while 8.6% are only willing to accept AI content occasionally. Less than a tenth of respondents (9.7%) said they do not accept automated content without human editing at all.
Table 2. Correlations (Spearman's rho).

                                                   AI quality   AI encounter frequency   Acceptance of AI content
AI quality                Correlation Coefficient  1.000        .050                     .270**
                          Sig. (2-tailed)          .            .097                     <.001
                          N                        1119         1118                     1119
AI encounter frequency    Correlation Coefficient  .050         1.000                    -.013
                          Sig. (2-tailed)          .097         .                        .644
                          N                        1118         1306                     1306
Acceptance of AI content  Correlation Coefficient  .270**       -.013                    1.000
                          Sig. (2-tailed)          <.001        .644                     .
                          N                        1119         1306                     1307

**. Correlation is significant at the 0.01 level (2-tailed).

The results of the Spearman correlation analysis (Table 2) indicate a statistically significant positive relationship between the perceived quality of AI-generated content, specifically the variable named AI Quality, and respondents' willingness to accept AI-generated content (rs = 0.270; p < 0.001). The result confirms that higher perceived content quality is associated with higher willingness to accept automated content without human intervention. In contrast, the relationship between AI-generated content encounter frequency and willingness to accept such content was not statistically significant (rs = -0.013; p = 0.644), suggesting that frequency of encounter alone is not directly related to the rate of acceptance of automated content.
Table 3. Model Fitting Information.

Model            -2 Log Likelihood   Chi-Square   df   Sig.
Intercept Only   302.014
Final            216.645             85.369       2    <.001

Link function: Logit.

The results of ordinal logistic regression (Table 3) show that the final model is statistically significant (χ² = 85.369; df = 2; p < 0.001), indicating that the inclusion of the independent variables AI Quality and AI Encounter Frequency significantly improves the ability to predict respondents' willingness to accept AI-generated content over a model containing only a constant.
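The chi-square in Table 3 is a likelihood-ratio statistic and can be verified directly from the two reported -2 log-likelihood values, as in the short check below (using scipy for the p-value).

```python
# Check: likelihood-ratio test from the Table 3 -2LL values.
from scipy.stats import chi2

neg2ll_intercept_only = 302.014
neg2ll_final = 216.645

lr_stat = neg2ll_intercept_only - neg2ll_final  # 85.369
p_value = chi2.sf(lr_stat, df=2)                # df = 2 predictors added
print(f"chi-square = {lr_stat:.3f}, p = {p_value:.3g}")  # p << .001
```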
Table 4. Goodness-of-Fit.

           Chi-Square   df   Sig.
Pearson    75.288       43   .002
Deviance   77.051       43   .001

Link function: Logit.

The goodness-of-fit tests (Table 4) indicate some disagreement between predicted and observed values (Pearson χ² = 75.288; df = 43; p = 0.002; Deviance χ² = 77.051; df = 43; p = 0.001). However, these tests are highly sensitive at large sample sizes, so the result should be interpreted as indicating only possible misfit.
Table 5. Pseudo R-Square.

Cox and Snell   .074
Nagelkerke      .084
McFadden        .037

Link function: Logit.

The pseudo R² values (Cox and Snell = 0.074; Nagelkerke = 0.084; McFadden = 0.037) indicate that the model explains a relatively low proportion of the variability in the dependent variable (Table 5). That is, the quality of AI-generated content and the frequency of exposure to AI content explain only part of the variability in respondents' willingness to accept AI content without modification. These results should be interpreted with caution due to the relatively low explanatory power (pseudo R²) and the violation of the parallel lines assumption, suggesting a need for more advanced regression techniques in future studies.
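For reference, the three measures in Table 5 are standard transformations of the null and fitted likelihoods, L_0 and L_M, for a sample of size n:

```latex
R^2_{\mathrm{McFadden}}   = 1 - \frac{\ln L_M}{\ln L_0}, \qquad
R^2_{\mathrm{Cox-Snell}}  = 1 - \left(\frac{L_0}{L_M}\right)^{2/n}, \qquad
R^2_{\mathrm{Nagelkerke}} = \frac{1 - (L_0/L_M)^{2/n}}{1 - L_0^{2/n}}
```

The Cox and Snell value can be checked against the Table 3 chi-square, since 1 - (L_0/L_M)^(2/n) = 1 - exp(-χ²/n) = 1 - exp(-85.369/1118) ≈ 0.074, matching the reported figure.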
Table 6. Parameter Estimates.

                                            Estimate   Std. Error   Wald     df   Sig.    95% CI Lower   95% CI Upper
Threshold   [AI content acceptance = 1]     1.383      .328         17.813   1    <.001   .741           2.025
            [AI content acceptance = 2]     1.804      .330         29.940   1    <.001   1.158          2.450
            [AI content acceptance = 3]     3.146      .341         85.335   1    <.001   2.478          3.813
Location    AI quality                      .576       .066         75.148   1    <.001   .445           .706
            AI encounter frequency          -.182      .085         4.559    1    .033    -.349          -.015

Link function: Logit.

The ordinal logistic regression results also indicate a significant positive relationship between the perceived quality of AI-generated content and respondents' willingness to accept this content (β = 0.576; Wald = 75.148; p < 0.001). Increasing ratings of AI content quality increases the likelihood of higher acceptance rates. Conversely, frequency of encounter with AI-generated content showed a negative relationship with respondents' willingness to accept such content (β = -0.182; Wald = 4.559; p = 0.033), indicating that, paradoxically, respondents with more frequent encounters with AI content showed a slightly lower willingness to accept such content without human editing.
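Because the model is a cumulative logit, these coefficients translate into proportional odds ratios by exponentiation; the short sketch below performs the conversion from the Table 6 estimates.

```python
# Converting Table 6 log-odds estimates to odds ratios.
import math

beta_quality = 0.576
beta_frequency = -0.182

print(f"{math.exp(beta_quality):.2f}")    # ~1.78: each one-step rise in
                                          # perceived quality multiplies the
                                          # odds of higher acceptance by ~1.8
print(f"{math.exp(beta_frequency):.2f}")  # ~0.83: each one-step rise in
                                          # exposure frequency cuts those
                                          # odds by roughly 17%
```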
Table 7. Test of Parallel Lines.

Model             -2 Log Likelihood   Chi-Square   df   Sig.
Null Hypothesis   216.645
General           205.692             10.953       4    .027

The null hypothesis states that the location parameters (slope coefficients) are the same across response categories.
Link function: Logit.

The parallel lines test (Table 7) shows a violation of the assumption of parallelism of the regression coefficients across levels of the dependent variable (χ² = 10.953; df = 4; p = 0.027). The result suggests that the relationship between the independent variables and willingness to accept AI content may vary depending on the specific level of the dependent variable.
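This test is itself a likelihood-ratio comparison of the proportional-odds model against a general model with separate slopes for each cumulative split (2 predictors × 2 additional logits = 4 df), and the reported p-value can be verified from the Table 7 figures:

```python
# Check: parallel-lines (proportional odds) test from Table 7.
from scipy.stats import chi2

neg2ll_null = 216.645     # equal slopes across categories
neg2ll_general = 205.692  # separate slopes per cumulative split

lr_stat = neg2ll_null - neg2ll_general      # 10.953
print(f"p = {chi2.sf(lr_stat, df=4):.3f}")  # ~0.027: assumption violated
```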
5. Discussion
The main objective of the present paper was to investigate the relationship between the quality of content generated by artificial intelligence (AI), the frequency of users' contact with this content, and their willingness to accept it without human editing. The analysis yielded several significant findings that merit closer interpretation in the context of existing knowledge.
The Spearman correlation results clearly show that the quality of AI-generated content is a significant factor that positively influences user acceptance of that content. This relationship was further confirmed by ordinal logistic regression, with AI quality being the strongest predictor of respondents' willingness to accept AI content. The findings confirm hypothesis H1: users are more willing to accept automated content if they perceive its quality as high or excellent. The result is consistent with the proposition that perceived quality plays a critical role in users' decisions to accept automated content. Users are less critical of AI-generated content if they perceive it to be trustworthy and of high quality. High perceived quality can therefore significantly alleviate users' biases and doubts about the reliability of automated content production. On a practical level, these findings suggest that organizations using AI for content production should prioritize the quality and content value of the generated outputs. User perception of quality may even be the decisive factor in the successful implementation of AI solutions in domains such as media, marketing, or education.
A further result is that the frequency of respondents' exposure to AI content did not have a positive relationship with willingness to accept content without human editing; on the contrary, the ordinal logistic regression showed a slightly negative relationship. This rejects hypothesis H2, which stated that more frequent contact with AI-generated content positively influences its acceptance. One possible interpretation is that more frequent contact allowed respondents to better recognize the shortcomings of automated content, such as inaccuracy, repetition, or lack of originality, which may paradoxically have reduced their trust in it. This interpretation is in line with authors who point out that increased exposure to AI content may sharpen users' critical faculties, making them increasingly sensitive to its imperfections. On the other hand, the absent or very weak correlation in the Spearman analysis suggests that the relationship between contact frequency and willingness to accept may not be absolute and may be influenced by other factors, such as technological literacy, perceived risk, or general attitudes towards artificial intelligence.
The relatively low pseudo R² values suggest that although AI content quality and AI encounter frequency are statistically significant, they explain only a small part of the variability in the willingness to accept content without human editing. It is therefore likely that other variables have a major influence on respondents' attitudes towards automated content, for example individual differences in digital literacy, level of privacy concerns, or trust in technological innovation. The findings support TAM by demonstrating that perceived usefulness (reflected in perceived quality) significantly predicts user acceptance. Trust theory also proved relevant, as users who frequently encounter imperfections show decreased trust and lower acceptance. Future research could therefore focus on the other factors mentioned above.
A limitation is the unbalanced sample, which may restrict the generalizability of the results, as well as unmeasured variables that may influence the model. Future research could use a larger and more representative sample, or a structured selection of respondents by age, education, or level of experience with technology.
The results have clear managerial implications. Businesses using AI to generate content should focus on maximizing the quality of the generated content, which has been shown to be key to positive user adoption; simply increasing the frequency of automated content without ensuring quality can be counterproductive. An important recommendation for marketers is to prioritize the qualitative aspects of AI-generated content over its quantity.
In conclusion, the present study contributes to a better understanding of the dynamics of the relationship between users and automated content, highlighting the importance of quality as a crucial factor in its acceptance. Further research should confirm these findings with different focus groups and explore the deeper factors behind these attitudes, including digital literacy, educational background, and prior attitudes towards AI as potential moderators of user acceptance. Future studies are also recommended to apply alternative modeling strategies, such as generalized ordered logistic regression, to address the methodological concerns noted above more robustly.
6. Conclusions
The main objective of the present paper was to analyze the relationship between the perceived quality of AI-generated content, the frequency of users' contact with this content, and their willingness to accept automated content without the need for human correction. The research was conducted through a quantitative questionnaire survey on a final sample of 1,118 respondents, and two main hypotheses were tested.
The results confirmed hypothesis H1, which states that higher ratings of the quality of AI-generated content increase users' willingness to accept it without human editing. The analysis showed that users respond significantly positively to high-quality AI-generated content, and this factor was identified as a key predictor of their acceptance. Hypothesis H2, which predicted a positive effect of frequency of exposure to AI-generated content on willingness to accept it, was not confirmed; on the contrary, the analysis indicated a slightly negative relationship, suggesting that more frequent contact may heighten user sensitivity to the shortcomings of automated content.
These findings yield practical recommendations, particularly for managers of AI content marketing: the most important strategy for increasing user acceptance of automated content is to ensure its high quality and content value. Conversely, simply increasing the frequency of contact without simultaneously increasing quality can paradoxically have the opposite effect. Future research should include more diverse samples, utilize advanced statistical models (such as generalized ordered logistic regression), and investigate deeper psychological and contextual factors influencing AI content acceptance. Despite certain limitations, the present results provide significant insights into the factors influencing consumers' willingness to adopt automated content and represent a valuable contribution to the burgeoning field of artificial intelligence research in digital content creation.
Abbreviations

CAWI: Computer Assisted Web Interviewing
AI: Artificial Intelligence
TAM: Technology Acceptance Model

Author Contributions
Michal Kubovics is the sole author. The author read and approved the final manuscript.
Funding
Funded by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under the project No. 09I03-03-V04-00367.
Data Availability Statement
The data is available from the corresponding author upon reasonable request.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] ALQAHTANI, Tariq et al., 2023. The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Research in Social and Administrative Pharmacy. Vol. 19, No. 8, pp. 1236-1242.
[2] AMANKWAH-AMOAH, Joseph et al., 2024. The impending disruption of creative industries by generative AI: Opportunities, challenges, and research agenda. International Journal of Information Management. Vol. 79, p. 102759.
[3] ELKHATAT, Ahmed M., ELSAID, Khaled and ALMEER, Saeed, 2023. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity. Vol. 19, No. 1, p. 17.
[4] FEUERRIEGEL, Stefan et al., 2024. Generative AI. Business & Information Systems Engineering. Vol. 66, No. 1, pp. 111-126.
[5] MOLINA, Maria D. and SUNDAR, S. Shyam, 2024. Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation. New Media & Society. Vol. 26, No. 6, pp. 3638-3656.
[6] CHANDRA, Shalini, SHIRISH, Anuragini and SRIVASTAVA, Shirish C., 2022. To Be or Not to Be... Human? Theorizing the Role of Human-Like Competencies in Conversational Artificial Intelligence Agents. Journal of Management Information Systems. Vol. 39, No. 4, pp. 969-1005.
[7] KANG, Hyunjin and LOU, Chen, 2022. AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication. Vol. 27, No. 5, p. zmac014.
[8] OVIEDO-TRESPALACIOS, Oscar et al., 2023. The risks of using ChatGPT to obtain common safety-related information and advice. Safety Science. Vol. 167, p. 106244.
[9] CHOUNG, Hyesun, DAVID, Prabu and ROSS, Arun, 2023. Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human-Computer Interaction. Vol. 39, No. 9, pp. 1727-1739.
[10] KAUR, Davinder et al., 2023. Trustworthy Artificial Intelligence: A Review. ACM Computing Surveys. Vol. 55, No. 2, pp. 1-38.
[11] LIU, Guangxiang and MA, Chaojun, 2024. Measuring EFL learners' use of ChatGPT in informal digital learning of English based on the technology acceptance model. Innovation in Language Learning and Teaching. Vol. 18, No. 2, pp. 125-138.
[12] SU, Diep Ngoc et al., 2022. Modeling consumers' trust in mobile food delivery apps: perspectives of technology acceptance model, mobile service quality and personalization-privacy theory. Journal of Hospitality Marketing & Management. Vol. 31, No. 5, pp. 535-569.
[13] CASTELO, Noah et al., 2023. Understanding and Improving Consumer Reactions to Service Bots. Journal of Consumer Research. Vol. 50, No. 4, pp. 848-863.
[14] SCHEPMAN, Astrid and RODWAY, Paul, 2023. The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory Validation and Associations with Personality, Corporate Distrust, and General Trust. International Journal of Human-Computer Interaction. Vol. 39, No. 13, pp. 2724-2741.
[15] GILAT, Ron and COLE, Brian J., 2023. How Will Artificial Intelligence Affect Scientific Writing, Reviewing and Editing? The Future is Here... Arthroscopy: The Journal of Arthroscopic & Related Surgery. Vol. 39, No. 5, pp. 1119-1120.
[16] ZHOU, Jiawei et al., 2023. Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-20. Hamburg, Germany: ACM. ISBN 9781450394215.
[17] KSHETRI, Nir et al., 2024. Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. International Journal of Information Management. Vol. 75, p. 102716.
[18] BRYNJOLFSSON, Erik, 2022. The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus. Vol. 151, No. 2, pp. 272-287.
[19] WACH, Krzysztof et al., 2023. The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review. Vol. 11, No. 2, pp. 7-30.
[20] MENON, Devadas and SHILPA, K, 2023. "Chatting with ChatGPT": Analyzing the factors influencing users' intention to use Open AI's ChatGPT using the UTAUT model. Heliyon. Vol. 9, No. 11, p. e20962.
[21] BÜCHI, Moritz, FESTIC, Noemi and LATZER, Michael, 2022. The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda. Big Data & Society. Vol. 9, No. 1, p. 20539517211065368.