by Jim Hall · Maxwell J. Peterson, Hanan C. Farah (eds.) · Teach yourself how to write programs with the C programming language. We’ll start with […]
Month: November 2025
Dropbox Introduces Dash: A Self-Setup AI Workspace for Team Productivity
In an age where efficiency is paramount for small businesses, Dropbox has unveiled a game-changer: Dash, an AI-powered personalized workspace designed to streamline collaboration and […]
There’s No Such Thing as ‘Best Practices’ When It Comes to Family Enterprise Governance
Adios, Windows: These alternatives make switching from Microsoft easy
If you can’t install Windows 11 on your computer, you don’t have to discard your hardware after support for Windows 10 ends. […]
Best Home Pet Cams of 2025: Tested with Our Pets
Article updated on November 2, 2025 at 4:00 AM PST. Tyler Lacoma, Editor / Home Security. For more than 10 years Tyler has used […]
6 Startups That Reveal the Secret to Attracting Gen Z Customers
Matched Clean Power Index
Many British consumers pay for 100% renewable electricity. But how much are they actually getting? The power sector has a dedicated system to track the […]
Tongyi DeepResearch – open-source 30B MoE Model that rivals OpenAI DeepResearch
From Chatbot to Autonomous Agent

We are proud to present Tongyi DeepResearch, the first fully open‑source Web Agent to achieve performance on par with OpenAI’s DeepResearch across a comprehensive suite of benchmarks. Tongyi DeepResearch demonstrates state‑of‑the‑art results, scoring 32.9 on the academic reasoning task Humanity’s Last Exam (HLE), 43.4 on BrowseComp and 46.7 on BrowseComp‑ZH, two extremely complex information‑seeking benchmarks, and 75 on the user‑centric xbench‑DeepSearch benchmark, systematically outperforming all existing proprietary and open‑source Deep Research agents.

Beyond the model, we share a complete and battle‑tested methodology for creating such advanced agents. Our contribution details a novel data synthesis solution applied across the entire training pipeline, from Agentic Continual Pre‑training (CPT) and Supervised Fine‑Tuning (SFT) for cold‑starting, to the final Reinforcement Learning (RL) stage. For RL, we provide a full‑stack solution, including algorithmic innovations, automated data curation, and robust infrastructure. For inference, the vanilla ReAct framework showcases the model’s powerful intrinsic capabilities without any prompt engineering, while the advanced Heavy Mode (test‑time scaling) demonstrates the upper limits of its complex reasoning and planning potential.

Continual Pre‑training and Post‑training Empowered by Fully Synthetic Data

Continual Pre‑training Data

We introduce Agentic CPT to deep‑research agent training, creating powerful agentic foundation models for post‑training. We propose AgentFounder, a systematic and scalable solution for large‑scale data synthesis that creates a data flywheel fed by data from the post‑training pipeline.

Data Reorganization and Question Construction. We continuously collect data from various sources, including documents, publicly available crawled data, knowledge graphs, and historical trajectories and tool‑invocation records (e.g., search results with links). As shown in the figure, these diverse data sources are restructured into an entity‑anchored open‑world knowledge memory. Based on randomly sampled entities and their corresponding knowledge, we generate multi‑style (question, answer) pairs.

Action Synthesis. Based on diverse problems and historical trajectories, we construct first‑order and higher‑order action‑synthesis data. Our method enables large‑scale and comprehensive exploration of the potential reasoning‑action space within offline environments, thereby eliminating the need for additional commercial tool API calls. Specifically, for higher‑order action synthesis, we remodel trajectories as multi‑step decision‑making processes to enhance the model’s decision‑making capabilities.

Post‑training Data

High‑quality synthetic QA pairs. We develop an end‑to‑end solution for synthetic data generation. This fully automated process requires no human intervention to construct super‑human‑quality datasets, designed to push the boundaries of AI agent performance. Through long‑term exploration and iteration, from early methods like reverse‑engineering QA pairs from clickstreams (WebWalker), to more systematic graph‑based synthesis (WebSailor and WebSailor‑V2), to formalized task modeling (WebShaper), our approach ensures both exceptional data quality and massive scalability, breaking through the upper limits of model capabilities.
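As an illustration of the entity‑anchored synthesis described above, here is a minimal Python sketch assuming a toy in‑memory knowledge store. Every name in it (KnowledgeMemory, make_qa_pair, the question templates) is a hypothetical stand‑in for exposition, not part of the AgentFounder pipeline.

```python
import random
from collections import defaultdict

# Toy sketch of an entity-anchored "open-world knowledge memory" and
# multi-style (question, answer) generation. All names and templates are
# illustrative assumptions, not the AgentFounder implementation.

class KnowledgeMemory:
    """Facts from documents, crawls, and trajectories, keyed by entity."""

    def __init__(self):
        self.facts = defaultdict(list)  # entity name -> list of fact strings

    def add(self, entity, fact):
        self.facts[entity].append(fact)

    def sample_entity(self):
        return random.choice(list(self.facts))

# A few question styles; a real pipeline would use many more, LLM-generated.
STYLES = [
    "What is known about {entity}?",
    "Summarize the key facts about {entity}.",
    "List everything the sources say about {entity}.",
]

def make_qa_pair(memory):
    """Sample an entity, render a random-style question, answer from its facts."""
    entity = memory.sample_entity()
    question = random.choice(STYLES).format(entity=entity)
    answer = "; ".join(memory.facts[entity])
    return question, answer

memory = KnowledgeMemory()
memory.add("Tongyi DeepResearch", "an open-source 30B MoE web agent")
memory.add("Tongyi DeepResearch", "trained with Agentic CPT, SFT, and RL")
print(make_qa_pair(memory))
```

In the real pipeline the answers are synthesized and verified rather than copied verbatim, but the flywheel has the same shape: restructure raw sources around entities, then repeatedly sample entities to mint training pairs.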
To address complex, high‑uncertainty questions, we synthesize web‑based QA data through a novel pipeline. The process begins by constructing a highly interconnected knowledge graph via random walks, fused with tabular data from real‑world websites, ensuring a realistic information structure. We then sample subgraphs and subtables to generate initial questions and answers. The crucial step involves intentionally increasing difficulty by strategically obfuscating or blurring information within the question. This practical approach is grounded in a complete theoretical framework, where we formally model QA difficulty as a series of controllable “atomic operations” (e.g., merging entities with similar attributes) on entity relationships, allowing us to systematically increase complexity (see the sketch after this section).

To further reduce inconsistencies between the organized information structure and the reasoning structure of the QA, and to enable more controllable scaling of reasoning difficulty and structure, we propose a formal model of the information‑seeking problem based on set theory. With this formalization, we developed agents that expand the problem in a controlled manner and minimize reasoning shortcuts and structural redundancy, leading to further improved QA quality. Moreover, this formal modeling also allows for efficient verification of QA correctness, effectively addressing the challenge of validating synthetic information‑seeking data for post‑training.

Furthermore, we have developed an automated data engine to scale up the creation of PhD‑level research questions. This engine begins with a multi‑disciplinary knowledge base, generating “seed” QA pairs that require multi‑source reasoning. Each seed then enters a self‑guided loop of “iterative complexity upgrades”, where a question‑crafting agent is equipped with a powerful toolset including web search, academic retrieval, and a Python execution environment. In each iteration, the agent expands knowledge boundaries, deepens conceptual abstraction, and even constructs computational tasks, creating a virtuous cycle where the output of one round becomes the more complex input for the next, ensuring a controllable and systematic escalation of task difficulty.

Unleashing Agent Capabilities […]
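Picking up the forward reference above: a toy sketch of difficulty escalation via “atomic operations” on a sampled subgraph. The tiny graph, the operation, and every name below are assumptions chosen for exposition, not the authors’ pipeline.

```python
# Toy sketch of raising QA difficulty with "atomic operations" on entity
# relationships, per the synthesis pipeline described above. The subgraph,
# the operation, and all names are illustrative assumptions.

subgraph = {
    "Alan Turing":   {"born": 1912, "field": "computability", "country": "UK"},
    "Alonzo Church": {"born": 1903, "field": "lambda calculus", "country": "US"},
}

def seed_question(entity):
    """An easy seed QA pair read straight off the sampled subgraph."""
    return f"In which country did {entity} work?", subgraph[entity]["country"]

def blur_entity(question, entity):
    """Atomic op: replace the named entity with an indirect attribute clue,
    so answering now needs an extra hop to resolve who is meant."""
    clue = f"the logician born in {subgraph[entity]['born']}"
    return question.replace(entity, clue)

question, answer = seed_question("Alan Turing")
question = blur_entity(question, "Alan Turing")  # stack more ops to raise difficulty
print(question, "->", answer)
# In which country did the logician born in 1912 work? -> UK
```

Each applied operation removes one direct lookup and adds one reasoning or retrieval hop, which is what makes difficulty controllable: complexity grows with the number and kind of operations stacked on the seed question.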
HyperRogue – A non-Euclidean roguelike
HyperRogue, the non-Euclidean roguelike, is a mind-melting masterpiece — Rock Paper Shotgun Current version: 12.0 (Jun 3, 2021) – get here or play online or […]
Qualcomm Snapdragon X Elite and X Plus Laptop Chips Explained
Qualcomm is one of the biggest under-the-hood names in mobile devices, producing the popular Snapdragon chips that power many of the best Android phones. Early […]