iuu.ai lists over 6,500 AI websites and model profiles; its entries are updated automatically by ChatGPT.
Reading: 25 2024-05-30
SoulChat is a large model for mental-health dialogue in Chinese; development happens at scutcyr/SoulChat on GitHub. Guided by the six traits of active health (proactivity, prevention, precision, personalization, co-construction and sharing, and self-discipline), the team has open-sourced ProActiveHealthGPT, a large-model base for active health in Chinese living spaces. It comprises BianQue, a living-space health model instruction-fine-tuned on millions of Chinese health dialogues, and SoulChat, a mental-health model fine-tuned on long-text Chinese instructions combined with millions of multi-turn empathy dialogues from psychological counseling. The team hopes ProActiveHealthGPT will accelerate academic research on, and application of, large models in chronic-disease management, psychological counseling, and other active-health areas. This project is the mental-health model, SoulChat.
Reading: 194 2023-07-23
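The SoulChat entry emphasizes fine-tuning on multi-turn empathy dialogues. As a rough illustration of what such data looks like at inference time, here is a minimal sketch of flattening a multi-turn counseling history into a single prompt string. The role labels and template are illustrative assumptions, not SoulChat's actual prompt format (which is defined in the scutcyr/SoulChat repository).

```python
# Hypothetical sketch: flatten a multi-turn counseling dialogue into one prompt.
# The "用户/心理咨询师" (user/counselor) template is an assumption for illustration;
# the real SoulChat format lives in the scutcyr/SoulChat repo.

def build_prompt(history, user_turn):
    """history: list of (user_utterance, model_reply) pairs from earlier turns."""
    parts = []
    for user_msg, bot_msg in history:
        parts.append(f"用户：{user_msg}")
        parts.append(f"心理咨询师：{bot_msg}")
    parts.append(f"用户：{user_turn}")
    parts.append("心理咨询师：")  # trailing role tag cues the model to respond
    return "\n".join(parts)

prompt = build_prompt(
    history=[("我最近压力很大。", "听起来你最近承受了不少，愿意多说说吗？")],
    user_turn="主要是工作上的事情。",
)
print(prompt)
```

Keeping the full history in the prompt is what distinguishes multi-turn dialogue tuning from single-shot instruction tuning.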
QiZhenGPT is an open-source Chinese medical language model. The project used the Qizhen medical knowledge base to build a Chinese medical instruction dataset, then instruction-fine-tuned the Chinese-LLaMA-Plus-7B, CaMA-13B, and ChatGLM-6B models on it, significantly improving their effectiveness in Chinese medical scenarios. An evaluation dataset for drug-knowledge Q&A was released first; the team then plans to optimize Q&A quality for diseases, surgery, and lab tests, and to expand into applications such as doctor-patient Q&A and automatic medical-record generation.
Reading: 136 2023-07-22
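The QiZhenGPT entry describes building a medical instruction dataset for supervised fine-tuning. A common shape for such data is the Alpaca-style instruction/input/output record; the sketch below shows that shape and how a record collapses into a prompt/target pair. The field names and the sample content are illustrative assumptions, not the project's actual data.

```python
import json

# Hypothetical Alpaca-style instruction record, a common format for medical
# instruction-tuning sets like the one QiZhenGPT describes. The content here
# is an illustrative assumption, not real project data.
record = {
    "instruction": "请介绍下列药品的主要适应症。",
    "input": "阿莫西林",
    "output": "阿莫西林是一种青霉素类抗生素，常用于治疗敏感菌引起的感染。",
}

def to_prompt(r):
    # Collapse a record into the (prompt, target) pair used during SFT.
    prompt = f"{r['instruction']}\n{r['input']}" if r["input"] else r["instruction"]
    return prompt, r["output"]

prompt, target = to_prompt(record)
print(json.dumps({"prompt": prompt, "target": target}, ensure_ascii=False))
```

The `input` field is optional in this convention; instruction-only records simply omit it from the prompt.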
The Pangu large models focus on deep industry adoption, building industry-specific models and capability sets for finance, government affairs, manufacturing, mining, meteorology, railways, and other sectors. They combine industry knowledge with large-model capabilities to reshape industries and serve as expert assistants for organizations, enterprises, and individuals.
Pangu NLP Large Model: the industry's first Chinese pre-trained model with over 100 billion parameters; pre-trained on big data, it draws on rich knowledge from multiple sources and continuously ingests massive text corpora to keep improving.
Pangu CV Large Model: a visual foundation model built on massive image and video data with Pangu's proprietary techniques, letting industry customers tackle specific scene tasks by fine-tuning with a small amount of scene data.
Pangu Multimodal Large Model: fuses language and visual cross-modal information to support image generation, image understanding, 3D generation, and video generation, providing a cross-modal capability foundation for industrial intelligent transformation.
Pangu Prediction Model: designed for structured data; starting from 10 categories and 2,000 base model spaces, it uses a two-step strategy of model recommendation and fusion to construct a graph-network AI model.
Pangu Scientific Computing Large Model: targets meteorology, medicine, water management, machinery, aerospace, and similar fields; it applies AI data modeling and AI equation solving, extracting mathematical regularities from massive data and encoding differential equations with neural networks, so scientific-computing problems can be solved faster and more accurately.
Reading: 134 2023-07-22
BianQue: guided by the six traits of active health (proactivity, prevention, precision, personalization, co-construction and sharing, and self-discipline), the team has open-sourced ProActiveHealthGPT, a large-model base for active health in Chinese living spaces. It comprises BianQue, a living-space health model instruction-fine-tuned on millions of Chinese health dialogues, and SoulChat, a mental-health model fine-tuned on long-text Chinese instructions combined with millions of multi-turn empathy dialogues from psychological counseling. The team hopes ProActiveHealthGPT will accelerate academic research on, and application of, large models in chronic-disease management, psychological counseling, and other active-health areas. This project is the living-space health model, BianQue.
Reading: 116 2023-07-23
AIGC text generation built on GPT-style large language models is a new approach to content creation. Daguan Data has long explored and practiced enterprise-oriented large language models; building on years of NLP work and large-scale data accumulation, it launched "Cao Zhi", a domestic ChatGPT-style system that was first to reach product-grade AIGC intelligent writing in vertical domains. Daguan Data is a national high-tech enterprise providing intelligent text robots for enterprise scenarios. It has received the Wu Wenjun Artificial Intelligence Award, the highest honor in China's AI field; was named to the third batch of national specialized and innovative "little giant" enterprises by the Ministry of Industry and Information Technology; and has been recognized as an IDC Innovator, a KPMG Fintech 50 company, one of the world's top 30 best startups, and one of China's top 50 AI technology innovators, among other honors. Its teams have won the overall championship of the Communist Youth League's China Youth Innovation and Entrepreneurship Competition, the global championship of the ACM CIKM algorithm competition, and the global championship of the EMI Hackathon data competition. Using natural language processing (NLP), intelligent document processing (IDP), optical character recognition (OCR), robotic process automation (RPA), knowledge graphs, and related technologies, Daguan provides large enterprises and government agencies with intelligent text-robot products, including intelligent document review, office-process automation, text recognition, enterprise-grade vertical search, and intelligent recommendation, letting computers take over business-process automation and greatly raising enterprise efficiency and intelligence.
Reading: 65 2023-07-22
Riding the wave of ChatGPT, artificial intelligence has kept expanding, providing fertile ground for the spread of LLMs. Healthcare, education, and finance have each begun developing their own models, but the legal field has seen no comparable progress. To promote open research on LLMs in law and other vertical domains, this project open-sources a Chinese legal model, ChatLaw, and offers a practical recipe for combining an LLM with a knowledge base in legal scenarios. The currently open-sourced versions for academic reference are based on Ziya (Jiang Ziya)-13B and Anima-33B. Dialogue data was constructed from a large corpus of original texts: legal news, legal forums, statutes, judicial interpretations, legal consultations, legal exam questions, and judgment documents. The Ziya-13B-based model was the first release; thanks to Ziya's strong Chinese ability and strict data cleaning and augmentation, it performs well on logically simple legal tasks but often struggles with complex legal reasoning. ChatLaw-33B, later trained on Anima-33B with additional data, shows markedly better logical reasoning, which suggests that large-parameter Chinese LLMs are crucial. The technical report is on arXiv: ChatLaw. Versions trained on commercially licensable base models are reserved as the internal integration line for subsequent products and are not open-sourced; the open-source models can be tried out via the project's demo.
Reading: 55 2023-07-23
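The ChatLaw entry highlights combining an LLM with a legal knowledge base. The core pattern is retrieval-augmented prompting: find the most relevant statute snippet, then prepend it to the user's question. The sketch below is a toy, dependency-free version of that pattern; the character-overlap scoring and the sample statutes are illustrative assumptions, not ChatLaw's actual retrieval pipeline.

```python
# Toy sketch of the "LLM + legal knowledge base" pattern: retrieve the most
# relevant snippet, then build an augmented prompt. Scoring and sample texts
# are illustrative assumptions, not ChatLaw's real pipeline.

def score(query, doc):
    # Character-overlap scoring: crude, but dependency-free for Chinese text.
    return len(set(query) & set(doc))

def retrieve(query, knowledge_base):
    # Return the single best-matching knowledge-base entry.
    return max(knowledge_base, key=lambda doc: score(query, doc))

kb = [
    "劳动合同法：用人单位应当按月支付劳动者工资。",
    "道路交通安全法：机动车驾驶人不得酒后驾驶。",
]
question = "公司拖欠工资怎么办？"
context = retrieve(question, kb)
prompt = f"参考法条：{context}\n问题：{question}\n回答："
print(prompt)
```

A production system would use dense embeddings and top-k retrieval instead of character overlap, but the prompt-assembly step looks much the same.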
This project aims to promote the open-source community for Chinese dialogue LLMs, with the vision of becoming an "LLM Engine" that helps everyone. Rather than focusing on how to pre-train large language models well, BELLE focuses on helping everyone obtain a language model with the best possible instruction-following ability on top of open-source pre-trained models, lowering the research and application barrier for large language models, especially Chinese ones. To that end, the BELLE project continually releases instruction-training data, models, training code, and application scenarios, and keeps evaluating how different training data and training algorithms affect model performance. BELLE is optimized for Chinese, and model tuning uses only data produced by ChatGPT (no other data).
Reading: 54 2023-07-22
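Since BELLE tunes only on ChatGPT-produced instruction data, the quality of the resulting model hinges on filtering that synthetic data. As a hedged illustration of the idea, here is a hypothetical minimal cleaning pass (exact-duplicate removal plus a length floor); it is not BELLE's actual pipeline, and the sample records are invented for the example.

```python
# Hypothetical minimal cleaning pass for synthetic instruction data:
# drop duplicate instructions and trivially short outputs. Not BELLE's
# real pipeline; thresholds and records are illustrative assumptions.

def clean(samples, min_output_chars=5):
    seen, kept = set(), []
    for s in samples:
        key = s["instruction"].strip()
        if key in seen:
            continue                      # drop exact-duplicate instructions
        if len(s["output"].strip()) < min_output_chars:
            continue                      # drop trivially short responses
        seen.add(key)
        kept.append(s)
    return kept

raw = [
    {"instruction": "用一句话介绍长城。", "output": "长城是中国古代修建的大型防御工程。"},
    {"instruction": "用一句话介绍长城。", "output": "长城是中国古代修建的大型防御工程。"},
    {"instruction": "翻译：hello", "output": "“hello”的中文是“你好”。"},
]
print(len(clean(raw)))
```

Real pipelines typically add near-duplicate detection and quality scoring on top of rules like these.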
The Mencius (Mengzi) pre-trained models are large-scale pre-trained language models developed by the team. They handle multilingual and multimodal data, support a range of understanding and generation tasks, and can be quickly adapted to different domains and application scenarios. The team is committed to providing global enterprises with a new generation of cognitive-intelligence platform built on NLP technology.
Reading: 45 2023-07-22
The "Fengshen List" (Fengshenbang) is a long-term open-source project maintained by engineers, researchers, and interns at the Cognitive Computing and Natural Language Center of the International Digital Economy Academy (IDEA) in the Guangdong-Hong Kong-Macau Greater Bay Area. The Fengshenbang open-source system re-examines the whole Chinese pre-trained large-model open-source community, aims to advance it comprehensively, and seeks to become the infrastructure of Chinese cognitive intelligence. Ziya (Jiang Ziya) General Large Model v1 is a 13-billion-parameter pre-trained model based on LLaMA, capable of translation, programming, text classification, information extraction, summarization, copywriting, common-sense Q&A, and mathematical calculation. Ziya has completed a three-stage training process: large-scale pre-training (PT), multi-task supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). It can support human-machine collaboration in scenarios such as digital humans, copywriting, chatbots, business assistants, Q&A, and code generation, improving work and production efficiency. Fengshenbang is the largest Chinese open-source pre-trained model system, with over 98 models released so far, including the first open-sourced Chinese Stable Diffusion and CLIP models; models such as Erlangshen-UniMC have taken multiple first places on leaderboards like FewCLUE and ZeroCLUE. The goal is to distill data and compute into pre-trained models with cognitive ability, forming a solid foundation for massive downstream tasks and algorithmic innovation research.
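The Ziya entry describes a three-stage recipe ending in human-feedback learning (RLHF). One intuition behind the feedback stage is scoring candidate responses with a reward model and preferring the best, as in best-of-n sampling. The sketch below is a toy version of that idea; the hand-written heuristic stands in for a learned reward model and is purely an illustrative assumption.

```python
# Toy illustration of the feedback-learning intuition behind RLHF:
# score candidate responses with a reward function and keep the best.
# toy_reward is a stand-in heuristic, not a real learned reward model.

def toy_reward(response):
    r = 0.0
    if response.strip():
        r += 1.0                               # reward non-empty replies
    if "请" in response or "您" in response:
        r += 0.5                               # reward polite phrasing
    r -= 0.01 * max(0, len(response) - 50)     # length penalty past 50 chars
    return r

def best_of_n(candidates):
    # Best-of-n selection: return the highest-reward candidate.
    return max(candidates, key=toy_reward)

candidates = ["", "好。", "您好，请问有什么可以帮您？"]
print(best_of_n(candidates))
```

In actual RLHF training the reward model's scores drive a policy-gradient update (e.g. PPO) rather than simple selection, but the ranking signal is the same.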
The GTS model production platform focuses on natural language processing, serving business scenarios such as intelligent customer service, data semantic analysis, and recommendation systems, and supporting tasks like e-commerce review sentiment analysis, scientific-literature subject classification, news classification, and content review. Under the GTS training system, only a small number of training samples are needed, with no knowledge of AI model training required, to obtain a lightweight small model ready for direct deployment.
Reading: 42 2023-07-22
The Tianyan Large Model is a multimodal large model (LMM) developed in-house by APUS, able to understand and generate text, images, video, and audio. Founded in July 2014, APUS is a global mobile-internet company whose core competitiveness is product technology. Since its founding, APUS has pursued the mission of bringing internet users worldwide a better life through technology, helping global users get the best mobile-internet experience. It has now launched an all-in "Big AI" strategy, repositioning itself as a global AI enterprise built on AI technology. APUS's footprint covers more than 200 countries and regions across Europe, the Americas, East Asia, Southeast Asia, South Asia, and the Middle East, including 65 countries along the Belt and Road; its products support more than 25 languages and reach over 2.4 billion users worldwide.
Reading: 41 2023-07-23