February 22, 04:06

China May Soon Surpass America on the Artificial Intelligence Battlefield

Elsa Kania (Security, Asia)

Swarms of unmanned aircraft could target high-value U.S. weapons platforms.

The rapidity of recent Chinese advances in artificial intelligence indicates that the country is capable of keeping pace with, or perhaps even overtaking, the United States in this critical emerging technology. The successes of major Chinese technology companies, notably Baidu Inc., Alibaba Group and Tencent Holdings Ltd.—and even a number of start-ups—have demonstrated the dynamism of these private-sector efforts in artificial intelligence. From speech recognition to self-driving cars, Chinese research is cutting edge. Although the military dimension of China’s progress in artificial intelligence has remained relatively opaque, there is also relevant research occurring in the People’s Liberation Army research institutes and the Chinese defense industry. Evidently, the PLA recognizes the disruptive potential of the varied military applications of artificial intelligence, from unmanned weapons systems to command and control. Looking forward, the PLA anticipates that the advent of artificial intelligence will fundamentally change the character of warfare, ultimately resulting in a transformation from today’s “informationized” (信息化) ways of warfare to future “intelligentized” (智能化) warfare.

February 21, 17:54

Sohu.com's (SOHU) Q4 Loss Widens, Revenues Fall Y/Y

Sohu.com Inc. (SOHU) reported a fourth-quarter 2016 non-GAAP loss of $1.79 per share, compared with a loss of $1.68 per share in the prior-year quarter.

February 14, 19:15

Before You Buy Social Media Stocks, Understand This

When smartphones went mainstream, social media stocks emerged. But for many of these companies, it’s been hard to stay in Wall Street’s good graces...

February 13, 08:02

How Chinese Internet Giant Baidu Uses AI And Machine Learning

Baidu, the Chinese internet giant and China’s counterpart to Google and Amazon, is using artificial intelligence, machine learning and deep learning to drive real-world performance and boost its competitiveness.

Editors’ choice
February 11, 15:05

Ford to invest $1 billion in a self-driving car company

American automaker Ford will invest $1 billion in Argo AI, a recently founded company developing self-driving cars, the AP reports. Its founders, Bryan Salesky and Peter Rander, worked on similar projects at Google and Uber, respectively, before joining forces in the Pittsburgh-based startup. Ford expects to release a fully driverless car by 2021. In August, Ford and Baidu, China’s largest internet search company, jointly invested $150 million in the development of systems for self-driving cars. The investment will fund the sensors and radar systems being developed by Velodyne LiDAR Inc., a specialized American company. Read more about the investment in Kommersant’s article «Ford и Baidu инвестируют в развитие беспилотных машин».

Editors’ choice
February 10, 15:27

Leak: Nokia to revive the legendary N series

HMD Global, the company that markets Nokia devices, intends to resume production of the N series. The Chinese site Baidu claims the devices could be unveiled at the MWC 2017 show in Barcelona on February 26.

Carrying on the tradition: in 2011 the company released the Nokia N9 smartphone, and in 2015 the Nokia N1 tablet. Because, after the deal with Microsoft, Nokia had no right to release smartphones under its own brand until the end of 2016, no further attempts were made to revive the N series. Now an image has surfaced that indirectly confirms new Nokia N devices. At least one of them will run on a Qualcomm Snapdragon 6xx processor, which points to a mid-range or high-end device. In total, the company plans to ship up to 500,000 N-series devices. In addition, the company may release a modern version of the Nokia 3310, although nothing is known yet about the successor to “the most indestructible phone in the universe.”

Editors’ choice
February 10, 05:42

Baidu shaking up its medical business

China's major internet search engine Baidu Inc is restructuring and optimizing its medical business.

February 9, 23:51

What Does the Future Hold for Uber and Airbnb?

What does the future hold for Uber and Airbnb? originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world. Answer by Brad Stone, Senior Executive Editor at Bloomberg and author of…

February 2, 18:20

What's in Store for Sohu.Com (SOHU) this Earnings Season?

Sohu.Com Inc. (SOHU) is expected to report fourth-quarter 2016 results sometime around Feb 6, 2017.

February 2, 13:06

Automakers to create a unified navigation system

BMW, Daimler, Hyundai, Volvo and Nissan have confirmed their participation in NDS, along with a number of mapping services, including TomTom, software maker Elektrobit, multimedia equipment suppliers Harman and Clarion, and the companies Baidu and Neusoft. The association says the push to standardize data is driven by the need for precise navigation for self-driving cars. The Open Lane Model format being developed by NDS will let all interested parties use common specifications for navigation systems and for software linking vehicles with road infrastructure. NDS predicts that a single data standard will speed up the development of self-driving cars and reduce development costs, because companies will not have to build their own software.

February 1, 21:30

Artificial intelligence defeats poker players

When it comes to poker, people, of course, feel superior to computers.

February 1, 20:31

Google overtakes Apple in brand value

Analysts estimate that in just one year the value of the Google brand grew by $21.3 billion (+24%), reaching $109.47 billion. Over the same period the Apple brand lost $38.78 billion (-27%) and is now valued at $107.141 billion. Brand Finance experts attribute this to the Cupertino company’s over-exploitation of its customers’ trust. In addition, Apple today has to compete not only with traditional rivals such as Samsung but also with a host of Chinese brands. Besides Apple, HP (-$11.65 billion), BT (-$6.96 billion), Vodafone (-$5.99 billion), Baidu (-$4.6 billion) and Ericsson (-$4.15 billion) also saw serious declines in brand value. The biggest gainers, in turn, were Amazon (+$36.75 billion), Facebook (+$28 billion), AT&T (+$27.1 billion) and Alibaba (+$16.99 billion). Despite its rapid growth, Amazon still holds third place in the ranking of the most valuable brands, at $106.4 billion. Microsoft, whose brand is valued at $76.265 billion, ceded fourth place to telecom operator AT&T, which gained $27.11 billion (+45%). Sixth place went to South Korean electronics maker Samsung, valued at $66.23 billion (+13%).

January 30, 14:00

Deep Learning Will Radically Change the Ways We Interact with Technology

Even though heat and sound are both forms of energy, when you were a kid, you probably didn’t need to be told not to speak in thermal convection. And each time your children come across a stray animal, they likely don’t have to self-consciously rehearse a subroutine of zoological attributes to decide whether it’s a cat or a dog. Human beings come pre-loaded with the cognitive gear to simply perceive these distinctions. The differences appear so obvious, and knowing the differences comes so naturally to us, that we refer to it as common sense.

Computers, in contrast, need step-by-step handholding—in the form of deterministic algorithms—to render even the most basic of judgments. Despite decades of unbroken gains in speed and processing capacity, machines can’t do what the average toddler does without even trying. That is—until now.

Over the last half-dozen years, deep learning, a branch of artificial intelligence inspired by the structure of the human brain, has made enormous strides in giving machines the ability to intuit the physical world. At Facebook’s AI lab, they’ve built a deep learning system capable of answering simple questions to which it had never previously been exposed. The Echo, Amazon’s smart speaker, uses deep learning techniques. Three years ago, Microsoft’s chief research officer impressed attendees at a lecture in China with a demonstration of deep learning speech software that translated his spoken English into Chinese, then instantly delivered the translation using a simulation of his voice speaking Mandarin—with an error rate of just 7%. Microsoft now uses the technology to improve voice search on Windows mobile and Bing.

The most powerful tech companies in the world have been quietly deploying deep learning to improve their products and services, and none has invested more than Google. It has “bet the company” on AI, says the New York Times, committing huge resources and scooping up many of the leading researchers in the field. And its efforts have borne fruit. A few years ago, a Google deep learning network was shown 10 million unlabeled images from YouTube, and proved to be nearly twice as accurate at identifying the objects in the images (cats, human faces, flowers, various species of fish, and thousands of others) as any previous method. When Google deployed deep learning on its Android voice search, errors dropped by 25% overnight. At the beginning of this year, another Google deep learning system defeated one of the best players of Go—the world’s most complex board game.

This is only the beginning. I believe that over the next few years start-ups and the usual big tech suspects will use deep learning to upgrade a wide suite of existing applications, and to create new products and services. Entirely new business lines and markets will spring up, which will, in turn, give rise to still more innovation. Deep learning systems will become easier to use and more widely available. And I predict that deep learning will change the way people interact with technology as radically as operating systems transformed ordinary people’s access to computers.

Deep Learning

Historically, computers performed tasks by being programmed with deterministic algorithms, which detailed every step that had to be taken. This worked well in many situations, from performing elaborate calculations to defeating chess grandmasters. But it hasn’t worked as well in situations where providing an explicit algorithm wasn’t possible—such as recognizing faces or emotions, or answering novel questions.
Trying to approach those challenges by hand-coding the myriad attributes of a face or phoneme was too labor-intensive, and it left machines unable to process data that didn’t fit within the explicit parameters provided by the programmers. Think of the difference between modern voice assistants like Siri or Alexa, which allow you to ask for things in various ways using natural language, and automated phone menu systems, which only perform if you use the specific set of non-negotiable words that they were programmed to understand.

By contrast, deep learning-based systems make sense of data for themselves, without the need of an explicit algorithm. Loosely inspired by the human brain, these machines learn, in a very real sense, from their experience. And some are now about as good at object and speech recognition as people.

So how does deep learning work? Deep learning systems are modeled after the neural networks in the neocortex of the human brain, where higher-level cognition occurs. In the brain, a neuron is a cell that transmits electrical or chemical information. When connected with other neurons, it forms a neural network. In machines, the neurons are virtual—basically bits of code running statistical regressions. String enough of these virtual neurons together and you get a virtual neural network. Think of every neuron in such a network as a simple statistical model: it takes in some inputs, and it passes along some output.

For a neural network to be useful, though, it requires training. To train a neural network, a set of virtual neurons is mapped out and assigned a random numerical “weight,” which determines how the neurons respond to new data (digitized objects or sounds). As in any statistical or machine learning approach, the machine initially gets to see the correct answers, too. So if the network doesn’t accurately identify the input—doesn’t see a face in an image, for example—then the system adjusts the weights—i.e., how much attention each neuron paid to the data—in order to produce the right answer. Eventually, after sufficient training, the neural network will consistently recognize the correct patterns in speech or images.

The idea of artificial neurons has been around for at least 60 years: in the 1950s, Frank Rosenblatt built a “perceptron” made of motors, dials, and light detectors, which he successfully trained to tell the difference between basic shapes. But early neural networks were extremely limited in the number of neurons they could simulate, which meant they couldn’t recognize complex patterns.

Three developments in the last decade made deep learning viable. First, Geoffrey Hinton and other researchers at the University of Toronto developed a breakthrough method for software neurons to teach themselves by layering their training. (Hinton now splits his time between the University of Toronto and Google.) A first layer of neurons learns how to distinguish basic features, say, an edge or a contour, by being blasted with millions of data points. Once the layer learns how to recognize these things accurately, its output gets fed to the next layer, which trains itself to identify more complex features, say, a nose or an ear. Then that layer’s output gets fed to another layer, which trains itself to recognize still greater levels of abstraction, and so on, layer after layer—hence the “deep” in deep learning—until the system can reliably recognize very complex phenomena, like a human face.
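To make the training mechanics above concrete, here is a minimal NumPy sketch, a toy illustration under simplified assumptions rather than any production system: a two-layer network starts with random weights and, on every pass, nudges them in whatever direction shrinks the gap between its output and the known correct answers, until it learns XOR, a pattern no single neuron can represent.

    import numpy as np

    # Toy data: XOR. A lone "neuron" (a linear model) cannot learn it,
    # but a network with one hidden layer can.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: each layer is a weighted sum squashed by a nonlinearity.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Compare against the known correct answers (supervised training).
        err = out - y

        # Backward pass: adjust every weight a little to reduce the error.
        g_out = err * out * (1 - out)
        g_h = (g_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ g_out;  b2 -= g_out.sum(axis=0)
        W1 -= X.T @ g_h;    b1 -= g_h.sum(axis=0)

    print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]

Stacking more such layers, each trained on the output of the one below it, is exactly the layering idea described above.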
The second development responsible for recent advances in AI is the sheer amount of data that is now available. Rapid digitization has resulted in the production of large-scale data, and that data is oxygen for training deep learning systems. Children can pick something up after being shown how to do it just a few times. AI-powered machines, however, need to be exposed to countless examples. Deep learning is essentially a brute-force process for teaching machines how a thing is done or what a thing is. Show a deep learning neural network 19 million pictures of cats, and probabilities emerge, inclinations are ruled out, and the software neurons eventually figure out what statistically significant factors equate to feline. It learns how to spot a cat. That’s why Big Data is so important—without it, deep learning just doesn’t work.

Finally, a team at Stanford led by Andrew Ng (now at Baidu) made a breakthrough when they realized that graphics processing unit chips, or GPUs, which were invented for the visual processing demands of video games, could be repurposed for deep learning. Until recently, typical computer chips could only process one event at a time, but GPUs were designed for parallel computing. Using these chips to run neural networks, with their millions of connections, in parallel sped up the training and abilities of deep learning systems by several orders of magnitude. It made it possible for a machine to learn in a day something that had previously taken many weeks.

The most advanced deep learning networks today are made up of millions of simulated neurons, with billions of connections between them, and can be trained through unsupervised learning. Deep learning is the most effective practical application of artificial intelligence that’s yet been devised. For some tasks, the best deep learning systems are pattern recognizers on par with people. And the technology is moving aggressively from the research lab into industry.
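The GPU breakthrough comes down to the fact that a layer’s arithmetic is one big matrix multiplication: thousands of independent multiply-adds that parallel hardware can execute simultaneously. Below is a rough sketch of the difference using NumPy on an ordinary CPU, where a single vectorized call stands in for the role the article assigns to GPUs; it is illustrative only, and real GPU speedups are far larger.

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    batch = rng.normal(size=(10_000, 512))  # 10,000 inputs at once
    W = rng.normal(size=(512, 512))         # one layer's weights

    # "One event at a time": feed inputs through the layer sequentially.
    t0 = time.perf_counter()
    out_seq = np.stack([x @ W for x in batch])
    t1 = time.perf_counter()

    # Parallel style: hand the whole batch to optimized vectorized code at once.
    out_par = batch @ W
    t2 = time.perf_counter()

    assert np.allclose(out_seq, out_par)  # same result, very different speed
    print(f"one at a time: {t1 - t0:.3f}s, batched: {t2 - t1:.3f}s")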
Deep Learning OS 1.0

As impressive as the gains from deep learning have been already, these are early days. If I analogize it to the personal computer, deep learning is in the green-and-black-DOS-screen stage of its evolution. A great deal of time and effort, at present, is being spent doing for deep learning—cleaning, labelling, and interpreting data, for example—rather than doing with deep learning. But in the next couple of years, start-ups and established companies will begin releasing commercial solutions for building production-ready deep learning applications. Making use of open-source frameworks such as TensorFlow, these solutions will dramatically reduce the effort, time, and costs of creating complex deep learning systems. Together they will constitute the building blocks of a deep learning operating system.

A deep learning operating system will permit the widespread adoption of practical AI. In the same way that Windows and Mac OS allowed regular consumers to use computers and SaaS gave them access to the cloud, tech companies in the next few years will democratize deep learning. Eventually, a deep learning OS will allow people who aren’t computer scientists or natural language processing researchers to use deep learning to solve real-life problems, like detecting diseases instead of identifying cats. The first new companies making up the deep learning operating system will be working on solutions in data, software, and hardware.

Data. Getting good-quality, large-scale data is the biggest barrier to adopting deep learning. But both service shops and software platforms will arise to deal with the data problem. Companies are already creating internal intelligent platforms that help humans label data quickly. Future data-labeling platforms will be embedded in the design of the application, so that the data created by using a product is captured for training purposes. And there will be new service-based companies that outsource labeling to low-cost countries, as well as create labeled data through synthetic means.

Software. There are two main areas here where I see innovation happening:

1) The design and programming of neural networks. Different deep learning architectures, such as CNNs and RNNs, support different types of applications (image, text, etc.). Some applications use a combination of neural network architectures. As for training, many applications will use a combination of machine learning algorithms, deep learning, reinforcement learning, or unsupervised learning for solving different sub-parts of the application. I predict that someone will build a machine learning design engine, which will examine an application, training data set, infrastructure resources, and so on, and recommend the right architecture and algorithms to use.

2) A marketplace of reusable neural network modules. As described above, different layers in a neural network learn different concepts and then build on each other. This architecture naturally creates the opportunity to share and reuse trained neural networks. A layer of virtual neurons that’s been trained to identify an edge, on its way up to recognizing the face of a cat, could also be repurposed as the base layer for recognizing the face of a person. Already, TensorFlow, the most popular deep learning framework, supports reusing an entire subgraph component. Soon, the community of machine learning experts contributing open-source modules will create the potential for deep learning versions of GitHub and Stack Overflow.

Hardware. Finding the optimal mix of GPUs, CPUs, and cloud resources; determining the level of parallelization; and performing cost analyses are complex decisions for developers. This creates an opportunity for platform and service-based companies to recommend the right infrastructure for training tasks. Additionally, there will be companies that provide infrastructure services—such as orchestration, scale-out, management, and load balancing—on specialized hardware for deep learning. Moreover, I expect incumbents as well as start-ups to launch their own deep learning-optimized chips.

These are just some of the possibilities. I’m certain there are many more lurking in other entrepreneurial minds, because the promise of this technology is immense. We are beginning to build machines that can learn for themselves and that have some semblance of sensible judgment.
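As a small illustration of the module-reuse idea in point 2 above, here is a sketch in the tf.keras API, a later evolution of the TensorFlow framework named in the article: a network trained on ImageNet is dropped in as the frozen base of a new recognizer, and only a small new head is trained for the new task. The dataset variables in the last line are hypothetical placeholders.

    import tensorflow as tf

    # Reuse trained lower layers (edges, textures, shapes) as a base "module".
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet")
    base.trainable = False  # keep the shared, already-trained layers fixed

    model = tf.keras.Sequential([
        base,                                   # repurposed feature layers
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # new binary task
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(new_task_images, new_task_labels, epochs=5)  # hypothetical data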

January 27, 15:25

4 Models for Using AI to Make Decisions

Charismatic CEOs enjoy leading and inspiring people, so they don’t like delegating critical business decisions to smart algorithms. Who wants clever code bossing them around? But that future’s already arrived. At some of the world’s most successful enterprises — Google, Netflix, Amazon, Alibaba, Facebook — autonomous algorithms, not talented managers, increasingly get the last word. Elite MBAs (Management by Algorithm) are the new normal.

Executives dedicated to data-driven excellence accept the reality that smart algorithms need greater autonomy to succeed. Empowering algorithms is now as organizationally important as empowering people. But without clear lines of authority and accountability, dual empowerment guarantees perpetual conflict between human and artificial intelligence. Computational autonomy requires that C-suites revisit the hows and whys of delegation. CEOs need to clarify when talented humans must defer to algorithmic judgment. That’s hard.

The most painful board conversations that I hear about machine learning revolve around how much power and authority super-smart software should have. Executives who wouldn’t hesitate to automate a factory now flinch at the prospect of deep-learning algorithms dictating their sales strategies and capex. The implications of success scare them more than the risk of failure. “Does this mean that all our procurement bids will be determined by machine?” asked one incredulous CEO of a multibillion-euro business unit. Yes, that’s exactly what it meant. His group’s data science, procurement, and supply chain teams had crafted algorithmic ensembles that, by all measures and simulations, would save hundreds of millions. Even better, they would respond 10 times faster to market moves than existing processes while requiring minimal human intervention. Top management would have to trust its computationally brilliant bidding software. That was the challenge. But the CEO wouldn’t — or couldn’t — pull the autonomy trigger.

“You need a Chief AI Officer,” Baidu chief scientist Andrew Ng told Fortune at January’s Consumer Electronics Show. (He explained why he thinks so in a recent HBR article.) Perhaps. But CEOs serious about confronting autonomy’s opportunities and risks should consider four proven organizational options. These distinct approaches enjoy demonstrable real-world success. The bad news: petabytes of new data and algorithmic innovation assure that “autonomy creep” will relentlessly challenge human oversight from within.

The Autonomous/Autonomy Advisor

McKinsey, Bain, and BCG are the management models here. Autonomous algorithms are seen and treated as the best strategic advisors you’ll ever have, but they never go away. They constantly drive data-driven reviews and make recommendations. They take the initiative on what to analyze and brief top management on what they find. But only the human oversight committee approves what gets “autonomized” and how it is implemented.

In theory, the organizational challenges of algorithmic autonomy map perfectly to which processes or systems are being made autonomous. In reality, “handoffs” and transitions prove to be significant operational problems. The top-down approach invariably creates interpersonal and inter-process frictions. At one American retailer, an autonomous ensemble of algorithms replaced the entire merchandising department.
Top management told store managers and staff to honor requests and obey directives from their new “colleagues”; the resentment and resistance were palpable. Audit software and human monitors were soon installed to assure compliance.

In this model, data scientists are interlocutors and ambassadors between the autonomy oversight committee and the targets of implementation. They frequently find the technologies less of a hassle than the people. They typically become the punching bags and shock absorbers for both sides, and they’re the ones tasked with blocking efforts to game the algorithms. Their loyalty and accountability belong to top management.

The Autonomous Outsourcer

“Accenturazon” — part Accenture, part Amazon Web Services — is the managerial model here. Business process outsourcing becomes business process algorithmization. The same sensibilities and economic opportunities that make outsourcing appealing become managerial principles for computational autonomy. That means you need crystal-clear descriptions and partitioning of both the tasks to be performed and the desired deliverables. Ambiguity is the enemy; crisply defined service-level agreements and explicit KPI accountability are essential. Process and decision owners determine the resource allocations and whether autonomy should lead to greater innovation, optimization, or both. Predictability and reliability matter most, and autonomy is a means to that end.

As with traditional outsourcing, flexibility, responsiveness, and interoperability invariably prove problematic. The emphasis on defined deliverables subverts initiatives that might lead to autonomy-driven new value creation or opportunity exploration. The enterprise builds up a superior portfolio of effective autonomous ensembles but little synergy between them. Smarter C-suites architect their autonomous Accenturazonic initiatives with interoperability in mind.

Data scientists in business process algorithmization scenarios are project managers. They bring technical coherence and consistency to SLAs while defining quality standards for data and algorithms alike. They support the decision and process owners responsible for autonomy-enabled outcomes.

World-Class Challenging/Challenged Autonomous Employee

Even the most beautiful of minds can come with intrinsic limitations, and in that way algorithms resemble eccentric geniuses. Can typical managers and employees effectively collaborate with undeniably brilliant but constrained autonomous entities? In this enterprise environment, smart software is seeded wherever computational autonomy can measurably supplement, or supplant, desired outcomes. The firm effectively trains its people to hire and work with the world’s best and brightest algorithms. The software is treated as a valued and valuable colleague that, more often than not, comes up with a right answer, if not the best one.

Versions of this are ongoing at companies such as Netflix and Alibaba. But I cannot speak too highly of Steven Levy’s superb Backchannel discussion of how Google has committed to becoming a “machine learning first” enterprise. “The machine learning model is not a static piece of code — you’re constantly feeding it data,” says one Google engineer. “We are constantly updating the models and learning, adding more data, and tweaking how we’re going to make predictions. It feels like a living, breathing thing. It’s a different kind of engineering.” Commingling person/machine autonomy necessarily blurs organizational accountability.
In such fast-changing learning environments, project and program managers can’t always know whether they will get better results from retraining people or retraining algorithms. That said, a culture of cocreation and collaboration becomes the only way to succeed.

Data scientists here facilitate. They staff what is effectively an autonomous-resources department, the counterpart of a human-resources department. They do things like write chatbots and adopt Alexa-like interfaces to make collaboration and collegiality simpler and easier. They look to minimize discrimination, favoritism, and tension in person/machine relationships. C-suites depend on them to understand the massive cultural transformation that pervasive autonomy entails.

All-In Autonomy

Renaissance Technologies and other, even more secretive investment funds are the management models here. These organizations are fully committed to letting algorithmic autonomy take the enterprise to new frontiers of innovation, profitability, and risk. Their results should humble those who privilege human agency. Human leadership defers to demonstrable algorithmic power. One quant designer at a New York hedge fund (one that trades more in a week than a Fortune 250 company makes in a year) confided: “It took years for us to trust the algorithms enough to resist the temptation to override them….There are still [occasional] trades we won’t make and [not doing them] almost always costs us money.”

Firms look to leverage, amplify, and network autonomy into self-sustaining competitive advantage. They use machine learning software to better train machine learning software. Machine learning algorithms stress-test and risk-manage other machine learning algorithms. Autonomy is both the organizational and the operational center of gravity for innovation and growth. People are hired and fired based on their ability to push the algorithmic boundaries of successful autonomy.

Leadership in these organizations demands humility and a willingness to convert trust in numbers into acts of faith. Academic computational finance researchers and fund managers alike tell me their machines frequently make trades and investments that the humans literally and cognitively do not understand. One of the hottest research areas in deep learning is crafting meta-intelligence software that generates rationales and narratives to explain data-driven machine decisions to humans. Risk management and the imperative to make complex autonomy humanly understandable dominate data science for all-in enterprises.

Admittedly, these four managerial models deliberately anthropomorphize autonomous algorithms. That is, the software is treated not as inanimate lines of code but as beings with some sort of measurable and accountable agency. In each model, C-suites rightly push for greater transparency and accessibility into what makes the algorithms tick. Greater oversight will lead to greater insight as algorithmic autonomy capabilities advance. CEOs and their boards need to monitor that closely. They also need to promote use cases, simulations, and scenarios to stress-test the boundary conditions for their autonomous ensembles. CEOs and executive leadership should be wary of mashing up or hybridizing these separate approaches. The key to making them work is to build in accountability, responsibility, and outcomes from the beginning. There must be clarity around direction, delegation, and deference.
While that maxim is based on anecdotal observation and participation, not statistical analysis, never underestimate how radical shifts in organizational power and influence can threaten self-esteem and subvert otherwise professional behavior. That’s why CEOs should worry less about bringing autonomy to heel than about making it a powerful source and force for competitive advantage. Without question, their smartest competitors will be data-driven autonomous algorithms.

January 26, 23:34

Harman (HAR) Q2 Earnings, Revenues Top Estimates, Grow Y/Y

Harman International Industries Inc. (HAR) reported second-quarter fiscal 2017 results in which non-GAAP earnings of $2.22 per share and revenues of nearly $1.947 billion easily beat the respective Zacks Consensus Estimates.

Editors’ choice
January 21, 03:00

Chinese company Baidu launches an augmented reality lab

China’s Baidu has launched an augmented reality laboratory.

January 20, 09:38

The development of AI can no longer be stopped

Andrew Ng, founder of the online education service Coursera and of Google Brain, Google’s deep learning division, and head of the artificial intelligence technology team at Baidu, spoke with The Ringer about how users interact with assistants and how workers can adapt to the new economy. The vc.ru editors selected the key points of the interview.

January 19, 19:01

Chinese investors buy tech media IDG

International Data Group, the owner of PCWorld magazine and market researcher IDC, said yesterday that it is being acquired by China Oceanwide Holdings Group and the investment management company IDG Capital.

January 19, 18:07

“The development of AI can no longer be stopped”: Coursera and Google Brain founder Andrew Ng on deep learning and the role of online education in the new economy

Andrew Ng, founder of the online education service Coursera and of Google Brain, Google’s deep learning division, and head of the artificial intelligence technology team at Baidu, spoke with The Ringer about how users interact with assistants and how workers can adapt to the new economy. The vc.ru editors selected the key points of the interview.

Editors’ choice
August 25, 2016, 08:37

Driverless taxis launch in Singapore. For free

Singapore-based nuTonomy has launched the first driverless taxi service, and, in a world first, rides are free of charge.