| Peer-Reviewed

Prospects and Challenges of Large Language Models in the Field of Intelligent Building

Received: 3 April 2023    Accepted: 8 May 2023    Published: 18 May 2023
Abstract

At the end of November 2022, ChatGPT, released by OpenAI, performed impressively and quickly became popular worldwide. Despite some shortcomings, Large Language Models (LLMs), represented by the Generative Pre-trained Transformer (GPT), are here to stay, leading the way for a new generation of Natural Language Processing (NLP) techniques. This commentary presents the potential benefits and challenges of applying large language models from the viewpoint of intelligent building. We briefly discuss the history and current state of large language models and their shortcomings. We then highlight how these models can be used to improve the daily maintenance of intelligent buildings. With regard to challenges, we address some vital problems that must be solved before deployment and argue that applying large language models in intelligent buildings requires maintenance staff to develop the competencies and literacies needed to understand both the technology and the maintenance and operation of intelligent buildings. In addition, a clear strategy within intelligent building teams, with a strong focus on cultivating AI talent and annotating training datasets, is required to integrate large language models into daily maintenance and take full advantage of them. We conclude with recommendations for how to address these challenges and prepare for further applications of LLMs in the field of intelligent building.

Published in Automation, Control and Intelligent Systems (Volume 11, Issue 1)
DOI 10.11648/j.acis.20231101.13
Page(s) 15-20
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Artificial Intelligence, Large Language Models, Intelligent Building

Cite This Article
  • APA Style

    Wu Yang, Wang Junjie, Li Weihua. (2023). Prospects and Challenges of Large Language Models in the Field of Intelligent Building. Automation, Control and Intelligent Systems, 11(1), 15-20. https://doi.org/10.11648/j.acis.20231101.13





Author Information
  • Vocational-Technical Training Schools of Northmount Inc., Beijing, China
