
The Lonely Reader (GuYue)

"The Lonely Reader", abbreviated "GuYue", was founded in 2014 by a group of graduates of Ivy League schools and Peking University. It is a cultural communication company devoted to humanities and liberal-arts education, aiming to help students build knowledge structures, reshape their critical-thinking systems, and improve their foreign-language proficiency. GuYue's teachers hope that, with the help of the Internet and capital, they can tear down the walls erected by knowledge gatekeepers, so that Chinese students can access the same educational resources as British and American universities without leaving home. GuYue is committed to helping students build a coherent knowledge structure, breaking the dilemma in which many students can only access fragmented knowledge, and enabling that knowledge to reproduce itself. On this basis, it reshapes students' critical-thinking systems, using the critical-thinking training methods of British and American universities so that students learn how to ask questions and independently find directions and methods for solving problems. At the same time, through extensive academic reading and writing training, students come to understand the meanings and grammatical logic of words and phrases at a fundamental level and rapidly improve their foreign-language ability. Besides basic humanities courses covering history, political philosophy, aesthetics, sociology, and philosophy, as well as popular less-commonly-taught language courses, GuYue equips each basic course with a set of tools for improving learning efficiency, such as learning methodologies, academic research training, thinking-model training, and a scientific learning-management system.

Reading: 198 2024-11-09

Fengshenbang (Fengshenbang-LM large models)

"Fengshenbang" is a long-term open-source project jointly maintained by a team of engineers, researchers, and interns from the Cognitive Computing and Natural Language Center of the International Digital Economy Academy (IDEA) in the Guangdong-Hong Kong-Macau Greater Bay Area. The Fengshenbang open-source system re-examines the entire Chinese pre-trained large-model open-source community, aims to comprehensively promote the development of the Chinese large-model community, and seeks to become the infrastructure for Chinese cognitive intelligence. The Ziya general large model V1 is a 13-billion-parameter pre-trained model based on LLaMA, capable of translation, programming, text classification, information extraction, summarization, copywriting, common-sense Q&A, and mathematical calculation. The Ziya general large model has completed a three-stage training process consisting of large-scale pre-training (PT), multi-task supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). It can assist human-machine collaboration in application scenarios such as digital humans, copywriting, chatbots, business assistants, Q&A, and code generation, improving work and production efficiency. Fengshenbang is the largest open-source pre-trained model system for Chinese, with over 98 open-source pre-trained models released so far, including the first Chinese Stable Diffusion and CLIP models. Models such as Erlangshen-UniMC have won multiple championships on benchmarks such as FewCLUE and ZeroCLUE. The project accumulates data and computing power into pre-trained models with cognitive abilities, with the goal of becoming a solid foundation for massive downstream tasks and algorithm-innovation research.
The GTS model production platform focuses on natural language processing, serving business scenarios such as intelligent customer service, semantic data analysis, and recommendation systems. It supports tasks such as e-commerce review sentiment analysis, scientific-literature subject classification, news classification, and content moderation. Under the GTS training system, only a small number of training samples need to be supplied, and no knowledge of AI model training is required, to obtain a lightweight model that can be deployed directly.

Tag: IDEA-CCNL

Reading: 42 2024-11-09

Pangu Large Models - Huawei Cloud

The Pangu large models are committed to deep industry cultivation, building industry-specific models and capability sets in fields such as finance, government affairs, manufacturing, mining, meteorology, and railways. They combine industry knowledge and expertise with large-model capabilities, reshaping thousands of industries and serving as expert assistants for organizations, enterprises, and individuals.
- Pangu NLP large model: the industry's first Chinese pre-trained large model with over 100 billion parameters. It uses big-data pre-training, combines rich knowledge from multiple sources, and continuously absorbs massive text data to improve its performance.
- Pangu CV large model: a visual foundation model built on massive image and video data with Pangu's proprietary technology, enabling industry customers to handle specific scene tasks by fine-tuning the model with a small amount of scene data.
- Pangu multimodal large model: integrates cross-modal language and visual information to support applications such as image generation, image understanding, 3D generation, and video generation, providing a cross-modal capability foundation for industrial intelligent transformation.
- Pangu prediction model: designed for structured data; based on 10 categories and a space of 2,000 base models, it uses a two-step optimization strategy of model recommendation and model fusion to construct an AI model with a graph-network architecture.
- Pangu scientific computing large model: aimed at fields such as meteorology, medicine, water management, machinery, and aerospace. It uses AI data modeling and AI equation-solving methods, extracts mathematical laws from massive data, encodes differential equations with neural networks, and solves scientific computing problems faster and more accurately.

Reading: 134 2024-11-09

MOSS

MOSS is the first conversational large language model released in China, developed by the Natural Language Processing Laboratory of Fudan University. On February 20, 2023, reporters from Jiefang Daily and Shangguan News learned from the laboratory that Professor Qiu Xipeng's team had released MOSS and invited the public to participate in internal testing. On February 21, the platform released an announcement thanking everyone for their attention, while also noting that MOSS was still a very immature model with a long way to go before it could match ChatGPT. MOSS is an open-source conversational language model that supports both Chinese and English as well as multiple plugins. The MOSS-moon series models have 16 billion parameters and can run on a single A100/A800 or two RTX 3090 graphics cards at FP16 precision, or on a single RTX 3090 at INT4/INT8 precision. The MOSS base language model was pre-trained on approximately 700 billion tokens of Chinese, English, and code, and was then fine-tuned with dialogue instructions, plugin-augmented learning, and human-preference training, giving it multi-round dialogue ability and the ability to use multiple plugins. Limitations: because of its relatively small parameter count and the autoregressive generation paradigm, MOSS may still produce misleading replies containing factual errors, or harmful content containing bias or discrimination. Please evaluate content generated by MOSS carefully, and do not spread harmful content generated by MOSS on the Internet; anyone who disseminates such content bears responsibility for any adverse consequences.

Reading: 43 2024-11-09

Daguan Data - Caozhi GPT Large Language Model

AIGC text generation based on GPT-style large language models is a new approach to content creation. Daguan Data has continued to explore and practice the development of large language models for enterprise services. Based on long-term NLP practice and massive accumulated data, it has launched the "Caozhi" system, a domestic counterpart to ChatGPT, and was the first to achieve product-level application of AIGC intelligent writing in vertical fields. Daguan Data is a national high-tech enterprise that provides intelligent text robots for enterprises across many scenarios. It has won the "Wu Wenjun Artificial Intelligence Award", the highest award in China's artificial intelligence field; was named to the third batch of national specialized and innovative "little giant" enterprises by the Ministry of Industry and Information Technology; and has been recognized as an IDC Innovator by the international consulting group IDC, a KPMG Fintech Top 50 company, one of the world's top 30 best startups, and a China AI Technology Innovation Top 50 company, among other honors. It has also won the overall championship of the Communist Youth League's China Youth Innovation and Entrepreneurship Competition, the global championship of the ACM CIKM algorithm competition, and the global championship of the EMI Hackathon data competition. Daguan Data uses technologies such as natural language processing (NLP), intelligent document processing (IDP), optical character recognition (OCR), robotic process automation (RPA), and knowledge graphs to provide large enterprises and government agencies with intelligent text robot products, including intelligent document review, office process automation, text recognition, enterprise-level vertical search, and intelligent recommendation, allowing computers to help automate business processes and greatly improving enterprise efficiency and intelligence.

Reading: 66 2024-11-09
