April 28, 18:02

Microsoft (MSFT) Tops Q3 Earnings and Revenue Estimates

Microsoft Corporation (MSFT) reported third-quarter fiscal 2017 earnings of 73 cents per share, which beat the Zacks Consensus Estimate by 4 cents.

April 28, 13:03

Neural networks for transactions: how "big data" really works in Russian banks

With the development of machine learning and neural networks, banks have gained tools for analyzing information about their customers from social networks. This has sparked debate about the ethics of using personal data without permission.

April 28, 13:00

Smart Cities Are Going to Be a Security Nightmare

In the fictional world of the video game Watch Dogs, you can play a hacktivist who takes over the central operating system of a futuristic, hyper-connected Chicago. With control over the city's security system, you can spy on residents using surveillance cameras, intercept phone calls, and cripple the city's critical infrastructure, unleashing a vicious cyberattack that brings the Windy City to its knees. Watch Dogs is just a game, but it illustrates a possible "what if" scenario that could happen in today's increasingly smart cities.

Advancements in artificial intelligence and Internet of Things (IoT) connected devices have made it possible for cities to increase efficiencies across multiple services like public safety, transportation, water management and even healthcare. An estimated 2.3 billion connected things will be used in smart cities this year, according to Gartner, Inc., the technology research and advisory company. That would represent a 42% increase in the number of connected devices since 2016. But the rise of digital connectivity also exposes a host of vulnerabilities cybercriminals are lining up to exploit. On April 8, hackers set off 156 emergency sirens in Dallas, Texas, disrupting residents and overwhelming 911 operators throughout the day. The number of attacks on critical infrastructure jumped from under 200 in 2012 to almost 300 in 2015. As smart cities move from concept to reality, securing their foundation will become a top priority to ensure the safety of our digitally connected communities.

Simply put, smart cities rely on interconnected devices to streamline and improve city services based on rich, real-time data. These systems combine hardware, software, and geospatial analytics to enhance municipal services and improve an area's livability. Inexpensive sensors, for example, can reduce the energy wasted in street lights or regulate the flow of water to better preserve resources. Smart cities rely on accurate data in order to function properly. Information that has been tampered with can disrupt operations — and constituents' lives — for days.

Several cities have adopted smart technologies, applying artificial intelligence to accelerate their transition into the future. In Barcelona, smart water meter technology helped the city save $58 million annually. In South Korea, one city cut building operating costs by 30% after implementing smart sensors to regulate water and electricity usage. With the global IoT footprint expected to surpass 50 billion connected devices by 2020, urban communities will need to strengthen existing cybersecurity protocols and disaster recovery methods to counter hackers searching for opportunities to wreak havoc. As smart city infrastructure proliferates, the stakes for protecting these digital foundations will only get higher.

While investment in smart technology has gone up, many of these innovations are deployed without robust testing, and cybersecurity is often neglected. For example, cities currently using a supervisory control and data acquisition (SCADA) system are particularly susceptible to frequent hacks due to poor security protocols. Though SCADA systems control large-scale processes and unify decentralized facilities, they lack cryptographic security and authentication factors.
If a hacker targets a city's SCADA system, they could threaten public health and safety and shut down multiple city services from a single entry point. Simple computer bugs can also cause significant glitches in control systems, leading to major technical problems for cities. Once hackers invade smart city control systems, they can send manipulated data to servers to exploit and crash entire data centers. This is how hackers gained access to an Illinois water utility control system in 2011, destroying a water pump that serviced 2,200 customers. Not only do these breaches disrupt daily operations for residents, they can be costly to remedy. A hypothetical hack that triggers a blackout in North America is estimated to leave 93 million people without power and could cost insurers anywhere from $21 billion to $71 billion in damages.

The inevitability of cyberattacks is a lesson the private sector has learned the hard way. As cities adopt smart initiatives, they would be wise to make data security a priority from the outset. In addition to physically securing the facilities controlling power, gas and water, city planners should also implement fail-safes and manual overrides in all systems and networks. This includes forcibly shutting down potentially hacked systems until security experts have the opportunity to resolve vulnerabilities. Encrypting sensitive data and deploying network intrusion mechanisms that regularly scan for suspicious activity can also protect against hackers trying to breach control systems remotely.

Smart cities can increase productivity and efficiency for citizens, but they create serious problems when security is underestimated. As local governments pursue smart initiatives, realizing the full potential of these digitally connected communities starts with implementing cybersecurity best practices from the ground up.
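The point above that SCADA systems lack cryptographic security and authentication can be made concrete with a small sketch: a sensor signs each reading with a keyed hash so the control server can reject tampered telemetry. This is a minimal illustration in Python using only the standard library; the device name, key handling, and message format are assumptions made for the example, not a description of any real deployment.

```python
import hmac
import hashlib
import json

# Illustrative shared secret; a real deployment would provision per-device keys
# in secure hardware rather than hard-coding them.
DEVICE_KEY = b"example-per-device-key"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the server can detect tampered telemetry."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time before acting on the data."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

if __name__ == "__main__":
    msg = sign_reading({"sensor": "water-pump-7", "flow_lpm": 412})
    assert verify_reading(msg)            # untouched message passes
    msg["payload"]["flow_lpm"] = 9999     # simulated manipulation in transit
    assert not verify_reading(msg)        # tampering is detected
```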

April 27, 13:00

Why Do IoT Companies Keep Building Devices with Huge Security Flaws?

Earlier this year an alarming story hit the news: Hackers had taken over the electronic key system at a luxury hotel in Austria, locking guests out of their rooms until the hotel paid a ransom. It was alarming, of course, for the guests and for anyone who ever stays at a hotel. But it came as no surprise to cybersecurity experts, who have been increasingly focused on the many ways in which physical devices connected to the internet, collectively known as the internet of things (IoT), can be hacked and manipulated. (The hotel has since announced that it is returning to using physical keys.)

It doesn't take a great leap to imagine an IoT hostage scenario, or all of the other ways hackers could wreak havoc with the networked objects we use every day. Smart devices permeate our homes and offices. Smoke detectors, thermostats, sprinklers, and physical access controls can be operated remotely. Virtual assistants, televisions, baby monitors, and children's toys collect and send data to the cloud. (One of the latest toy breaches, involving CloudPets teddy bears, is now the subject of a congressional inquiry.) Some smart technologies can save lives, such as medical devices that control intravenous drug doses or remotely monitor vital signs.

The problem is that many IoT devices are not designed or maintained with security as a priority. According to a recent study by IBM Security and the Ponemon Institute, 80% of organizations do not routinely test their IoT apps for security vulnerabilities. That makes it a lot easier for criminals to use IoT devices to spy, steal, and even cause physical harm. Some observers attribute the failure to the IoT gold rush, and are calling for government to step in to regulate smart devices.

When it comes to cybersecurity, however, regulation can be well-intentioned but misguided. Security checklists that are drafted by slow-moving government bodies can't keep up with evolving technology and hacking techniques, and compliance regimes can divert resources and give a false sense of security. Add up all the different federal, state, and international agencies that claim a piece of the regulatory pie, and you get a mishmash of overlapping requirements that can confuse and constrain companies — but leave hackers plenty of room to maneuver.

The Obama administration pushed regulatory proposals for cybersecurity infrastructure in its early years, but eventually pivoted to a more effective risk-management approach. This was embodied by the widely acclaimed National Institute of Standards and Technology (NIST) Cybersecurity Framework, which was developed in collaboration with the industry and provides risk-based guidance and best practices that can be adapted to an organization of any size or profile. Early signs are that the Trump administration plans to continue the NIST approach. A wise next step would be to build on that success and develop a similar framework for IoT. Rather than trying to dictate specific controls for a diverse, growing set of technologies, the framework could harmonize international best practices for IoT and help companies prioritize the most important security strategies for their organization. This is essentially what the bipartisan Commission on Enhancing National Cybersecurity recommended to the new administration in December.
A framework could also serve as a much-needed coordination point for a number of fragmented IoT efforts currently under way in federal agencies. It would be a mistake, however, for the IoT industry to wait for governments to step in. The problem is urgent, and it will become even more so as new IoT attacks come to light, as they certainly will. IoT providers can demonstrate that they are serious about security by taking some basic steps.

First, security and privacy should be incorporated into design and development. Most security testing of IoT devices occurs in the production phase, when it is too late to make significant changes. Planning and investment up front can go a long way. For example, many IoT devices share default user names and passwords that are well known and can be found with a quick Google search. Because most consumers do not change those settings, products should be designed to ship with unique credentials, or require users to set new credentials upon first use. This would thwart the easiest and most widespread method of compromising IoT devices. Just last fall, hackers used known factory credentials to infect thousands of DVRs and webcams with the Mirai botnet, which was used to cause massive internet outages.

Second, IoT devices should be able to receive software updates for their entire life span. New software vulnerabilities are often discovered after a product is released, making security patching critical to defend against threats. If there are limits to the length of time that updates can reasonably be provided, then the product should be clearly labeled with an "expiration date," past which security will no longer be maintained.

Third, transparency to consumers should be improved. Unlike mobile phones and computers, IoT devices often operate without human supervision or visibility. Many of these objects lack screens to display messages. As with other types of product recalls, owners need to be notified when the device has a security issue and told how to apply security updates. When IoT devices are resold, there should be a simple way to conduct a factory reset to erase data and credentials. For example, IBM Security recently demonstrated how sellers of used cars can retain access to vehicles' remote functions (like geolocation) without buyers being aware.

It is still early days in the world of IoT, but it's a fast-moving world, with billions of new devices being connected every year. And the window on building a trustworthy ecosystem is closing. Will others follow the Austrian hotel's example, disconnecting when their devices, and their trust, are breached? The IoT industry should not wait to find out. We can either invest now in securing that trust, and safely enjoy the benefits of this remarkable technology, or we can expect hackers to wreak more havoc and governments to intervene in a heavy-handed manner.
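The first recommendation above, shipping devices with unique credentials or forcing users to set new ones on first use, is straightforward to sketch. The snippet below is a hypothetical first-boot routine in Python: it generates a per-device password, stores only a salted hash, and never relies on a factory-wide default of the kind the Mirai botnet exploited. The storage path and parameters are illustrative assumptions.

```python
import secrets
import hashlib
import os

def provision_first_boot_credentials() -> str:
    """Generate a unique per-device password at first boot instead of a
    factory-wide default, and persist only a salted hash of it."""
    password = secrets.token_urlsafe(12)          # unique per device
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    with open("credentials.bin", "wb") as fh:     # hypothetical storage location
        fh.write(salt + digest)
    return password                               # shown once, e.g. on a setup screen

def verify_password(candidate: str) -> bool:
    """Check a login attempt against the stored salted hash."""
    with open("credentials.bin", "rb") as fh:
        blob = fh.read()
    salt, digest = blob[:16], blob[16:]
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
    return secrets.compare_digest(attempt, digest)
```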

Editors' choice
April 26, 19:03

CMOs' Evolving Analytics Imperative: A Q&A With Accenture's Matt Gay

In an effort to wade through the noise and get clarity around the evolving analytics imperative for CMOs, I’m taking questions to key participants in the advertising and marketing industry, those whose organizations sit at the crossroads of problem and solution.

Editors' choice
April 26, 14:00

The C-Suite and IT Need to Get on the Same Page on Cybersecurity

A recently published global survey of C-suite executives and IT decision makers (ITDMs) revealed a large gap in assessments of cyber threats, costs and areas of responsibility. Among the most significant disconnects: 80% of the executives surveyed in the U.S. believe cybersecurity to be a significant challenge facing their business, while only 50% of ITDMs agree. ITDMs estimated the average cost of a cyber breach at $27.2 million, much higher than the average $5.9 million cited by executives. And 50% of the executives surveyed believe that a successful attack on their organization would be due to human error by employees, compared with 31% of ITDMs.

The research shows there is a lack of understanding when it comes to the cost of a successful breach, which many underestimate. It isn't just about what the thieves get away with. A successful cyber attack can have far-reaching implications such as impacting share price, lost business, fines — even a failed strategic investment or merger.

Gaps between the strategic visions of the C-suite and the real-world experiences of IT specialists should not be a surprise. The two groups may think differently about the nature of cyber risk and about the way threats translate into business and technological risks. This is largely due to their priorities — C-suite executives have responsibility for mitigating business risk, while IT delivers the technological support that drives the business. The most common area of agreement between these key groups is that danger lurks in cyberspace. Sixty percent of C-suite executives and 66% of ITDMs think their businesses will be targeted for a cyber attack in the next 12 months, and both groups report that they expect the frequency and severity of attacks to increase. This is confirmation that the threat of cyber attack is now just part of the day-to-day reality of doing business in a hyper-connected world.

Organizations that take cybersecurity seriously should implement best practices that will help reduce the disconnects and ensure effective cyber risk management. Among them:

Include the C-suite in incident response table-top exercises so they fully understand their roles, and all the possible costs of an attack. Having firsthand experience of an attack, even a simulated one, means the C-suite will gain awareness that's vital to driving a top-down, security-focused culture.

Educate both groups — and all employees — on the need to understand their organization's cyber exposure and how attackers can exploit information they gather from reconnaissance efforts to craft targeted attacks. This should be more than a theoretical exercise, using real examples of what can be found about the organization. For example, customer details including login credentials and account information are often for sale on the dark web. This information can be leveraged by attackers to create synthetic IDs that are often used to enable cyber crime.

Introduce a forward-looking, strategic approach to cyber defense to deal with the likelihood of cyber attacks. This strategy must strike an appropriate balance between tools, people and processes. There is no silver bullet when it comes to protecting critical assets, and technology cannot be counted on alone. You can have the latest and greatest technology in place, but it can still be vulnerable if you don't have the right people with the correct skills as well.
Furthermore, operating procedures need to be well defined and clearly communicated to get the most from the technology. For example, security teams need to have enough bandwidth to investigate the alerts that are being generated; simply turning up the alerting threshold to reduce the number of alerts is not a good way to deal with a lack of bandwidth. Exploring the use of automation in operational processes, where possible, is becoming a focus as security professionals look to maximize what they can do with existing resources. To triage efficiently, security teams need as much context as possible to ascertain whether an alert is important or not. This context includes internal as well as external data, such as threat intelligence, which can provide broader context on attack groups' tools, tactics and procedures.

With the continued risk of ransomware attacks, IT teams must implement an appropriate backup strategy to help mitigate the impact of these attacks. If valuable data is lost because it was encrypted by ransomware, backups can be used to restore the data without the need to pay the ransom. Data needs to be stored in protected locations to ensure that it isn't encrypted during an attack. This backup strategy needs to be part of an organization's broader incident response plan, which should capture in detail what would be done to contain and then recover from a ransomware attack.

Assume that at some point your organization will be breached. Review your ability to detect and respond to threats inside your network and on your endpoints. New security initiatives should focus on reducing the time it takes to discover, contain and remediate unwanted activity on your systems. It is now broadly accepted by security thought leaders that looking only for patterns of nefarious activity derived from previously seen attacks is not sufficient to detect well-crafted targeted attacks that are likely not to have been seen before. To reduce the time it takes to detect unwanted activity in IT systems, organizations need to evaluate additional detection techniques. For example, hackers often establish command-and-control channels to direct their attacks; finding these channels is crucial to uncovering unwanted activities. As the threats evolve, it isn't just about tracking known threats, but about taking a proactive approach and working to understand new, unknown cyber threats.
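The triage point above, that analysts need internal and external context such as threat intelligence rather than a higher alerting threshold, can be illustrated with a toy prioritization routine. Everything here, the intelligence feed, the alert fields, and the scoring weights, is an assumption invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical threat-intelligence feed: indicators mapped to what is known about them.
THREAT_INTEL = {
    "203.0.113.9": {"actor": "known-botnet", "confidence": 0.9},
}

@dataclass
class Alert:
    source_ip: str
    rule: str
    asset_criticality: int  # 1 (low) .. 5 (crown jewels), internal context

def triage_score(alert: Alert) -> float:
    """Combine internal context (asset criticality) with external context
    (threat intel) instead of filtering alerts by volume alone."""
    score = alert.asset_criticality / 5
    intel = THREAT_INTEL.get(alert.source_ip)
    if intel:
        score += intel["confidence"]
    return score

alerts = [
    Alert("198.51.100.4", "port-scan", asset_criticality=1),
    Alert("203.0.113.9", "credential-stuffing", asset_criticality=4),
]
# Investigate the highest-scoring alerts first.
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.rule:20s} {a.source_ip:15s} score={triage_score(a):.2f}")
```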

April 26, 07:01

On the verge of a nervous breakdown: how Arianna Huffington is making money on the global stress epidemic

In August 2016 the founder of The Huffington Post stepped down as editor-in-chief and launched Thrive Global, a new project devoted to healthy living. The career change was prompted by a nervous breakdown with a dramatic ending. In an interview with Forbes Woman, Huffington described how her life spun out of control.

April 25, 15:00

8 Ways Governments Can Improve Their Cybersecurity

It's hard to find a major cyberattack over the last five years where identity — generally a compromised password — did not provide the vector of attack. Target, Sony Pictures, the Democratic National Committee (DNC) and the U.S. Office of Personnel Management (OPM) were each breached because they relied on passwords alone for authentication. We are in an era where there is no such thing as a "secure" password; even the most complex password is still a "shared secret" that the application and the user both need to know, and store on servers, for authentication. This makes passwords inherently vulnerable to a myriad of attack methods, including phishing, brute-force attacks and malware.

The increasing use of phishing by cybercriminals to trick users into divulging their password credentials is the most alarming trend — a recent report from the Anti-Phishing Working Group (APWG) found that 2016 was the worst year in history for phishing scams, with the number of attacks increasing 65% over 2015. Phishing was behind the DNC hack, as well as a breach of government email accounts in Norway, and was the method that state-sponsored hackers recently used in an attempt to steal the passwords of prominent U.S. journalists. Phishing is on the rise for a simple reason: it is a relatively cheap and effective form of attack, and one that puts the security onus on the end user. And, given that many users tend to reuse passwords, once these passwords are compromised, they can be used to break into other systems and bypass traditional network security measures.

In response to the increased frequency of such authentication-based cyberattacks, governments around the world are pursuing policies focused on driving the adoption of multi-factor authentication (MFA) solutions that can prevent password-based attacks and better protect critical data and systems. The U.S., UK, EU, Hong Kong, Taiwan, Estonia and Australia are among the countries that have focused on this issue over the last five years. One challenge countries face: there are hundreds of MFA technologies vying for attention, but not all are created equal. Some have security vulnerabilities that leave them susceptible to phishing, such as one-time passwords (OTPs) — a password that is valid for only one login session or transaction — which, while more secure than single-factor authentication, are themselves still shared secrets that can be compromised. Some solutions are unnecessarily difficult to use, or have been designed in a manner that creates new privacy concerns. As policymakers work to address these authentication issues, they will need to adopt solutions that move away from the shared-secret model while also being easy for consumers and employees to use.

According to a new white paper published by The Chertoff Group, governments can best ensure the protection of critical assets in cyberspace by following eight key principles for authentication policy:

1. Have a plan that explicitly addresses authentication. While a sound approach to authentication is just one element of a proper approach to cyber risk management, any cyber initiative that does not include a focus on strong authentication is woefully incomplete.

2. Recognize the security limitations of shared secrets. Policymakers should understand the limitations of first-generation MFA technologies such as OTPs that rely on shared secrets, and look to incent adoption of more secure alternatives, such as those that utilize public key cryptography where keys are always stored on — and never leave — the user's device, like FIDO authentication standards (a minimal sketch of this idea appears below).

3. Ensure authentication solutions support mobile. As mobile transaction usage grows, any policy that is not geared toward optimizing use of MFA in the mobile environment will fail to adequately protect transactions conducted in that environment.

4. Don't prescribe any single technology or solution — focus on standards and outcomes. Authentication is in the midst of a wave of innovation, and new, better technologies will continue to emerge. For this reason, governments should focus on a principles-based approach to authentication policy that does not preclude the use of new technologies.

5. Encourage widespread adoption by choosing authentication solutions that are easy to use. Poor usability frustrates users and prevents widespread adoption. Next-generation MFA solutions dramatically reduce this "user friction" while offering even greater security gains. Policymakers should look for incentives to encourage use of next-generation MFA that addresses both security and user experience.

6. Understand that the old barriers to strong authentication no longer apply. One of the greatest obstacles to MFA adoption has been cost — previously, few organizations could afford to implement first-generation MFA technologies. Today, there are dozens of companies delivering next-generation authentication solutions that are stronger than passwords, simpler to use and less expensive to deploy and manage.

7. Know that privacy matters. MFA solutions can vary greatly in their approach to privacy — some track users' every move or create new databases of consumer information. Such solutions raise privacy concerns and create new, valuable caches of information that are subject to attack. Thankfully, several authentication companies have now adopted a "privacy by design" approach that keeps valuable biometrics on a user's device and minimizes the amount of personal data stored on servers.

8. Use biometrics appropriately. The near ubiquity of biometric sensors in mobile devices is creating new options for secure authentication, making it easier to use technology such as fingerprint and face recognition. However, biometrics are best used as just one layer of a multi-factor authentication solution — matching a biometric on a device to then unlock a second factor. Ideally, biometrics should be stored and matched only on a device, avoiding the need to address the privacy and security risks associated with systems that store biometrics centrally. Any biometric data stored on a server is vulnerable to getting into the wrong hands if that server is compromised, as happened in the June 2015 OPM breach, which resulted in 1.1 million compromised fingerprints.

Policymakers have resources and industry standards to help guide them as they address these principles. The Fast Identity Online (FIDO) Alliance has developed standards designed to take advantage of the advanced security hardware embedded in modern computing devices, including mobile phones.
FIDO’s standards have been embraced by a wide cross-section of the technology community and are already incorporated into solutions from companies such as Microsoft, Google, PayPal, Bank of America, Facebook, Dropbox, and Samsung. No technology or standard can eliminate the risk of a cyberattack, but the adoption of modern standards that incorporate MFA can be an important step that meaningfully reduces cyber risk. By following these eight principles, governments can create a policy foundation for MFA that not only enhances our collective cyber security, but also helps to ensure greater privacy and increased trust online.
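Below is a minimal sketch of the challenge-response idea referenced in principle 2: a private key is generated on the user's device and only a public key is registered with the server, so there is no shared secret to phish or breach. It uses the open-source `cryptography` package and is an illustration of the principle, not an implementation of the FIDO specifications.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Device side: the private key is generated on, and never leaves, the device.
device_key = Ed25519PrivateKey.generate()
# Server side: only the public key is registered; there is no shared secret to steal.
registered_public_key = device_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)
# ...the device signs it (in FIDO this step would be gated by a local biometric or PIN)...
signature = device_key.sign(challenge)
# ...and the server verifies the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```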

April 24, 15:00

There Will Always Be Limits to How Creative a Computer Can Be

Artificial intelligence is disrupting the workforce, and will continue to do so as its capabilities increase. Inevitably, "artificial intelligence will soon be able to do the administrative tasks that consume much of managers' time faster, better, and at a lower cost." But when it comes to more complex and creative tasks such as innovation, the question remains whether AI can do the job better than humans.

There's no doubt that recent advancements in AI have been extraordinary. Watson, for example, is now helping with cancer research and tax returns, among other things. And AlphaGo, a computer program designed to play the ancient board game Go, beat Lee Sedol, one of the best players in the world, in a 4-1 landslide. Since Go is a far more complex game than chess, AlphaGo's victory was a major breakthrough. But it didn't stop there. After its March 2016 victory, the Google-owned program racked up over sixty wins against online opponents, a streak that sent the world's top players into a tailspin. "Humans have evolved in games in thousands of years—but computers now tell us humans are all wrong," said Chen Yaoye, a Go expert from China. "I think no one is even close to know the basics of Go."

Despite these advancements, computers still have limitations. For example, even though AlphaGo leaned on deep learning to make moves that increased its chances of winning, its programmers didn't know why the system made certain moves. They could peer in to see such things as the values at the many digital "neurons" of a many-layered network, but much of the information was distributed across large swaths of the neural network and was not available to be summarized in a reasoned argument as to why a particular move was chosen. Until such summarizing is possible, managers and CEOs who are experts in their various fields won't trust a computer — or another human — that cannot offer a reason for why a creative yet risky move is the best one to make. Will Knight explores the many facets of this current limitation in his provocative article "The Dark Secret at the Heart of AI."

To add to that, my colleague Lee Spector and I have developed a mathematical proof that shows the limits to how creative a computer can be. In a paper published in the journal Artificial Intelligence for Engineering Design, Analysis, and Manufacturing (AIEDAM), we found that the fastest modern supercomputer couldn't list or explore all the features of an object even if it had started working on the problem back in the 1950s. When you consider the Obscure Features Hypothesis for Innovation, which states that every innovative solution is built upon at least one new or commonly overlooked feature of a problem, you can see how AI may never advance enough to take the jobs of Chief Innovation Officers.

Further, computers cannot always fill in the gaps in our understanding of how things work. Phone case makers dropped many encased smartphones on hard surfaces before selling their products. They did not just rely on computer simulations of dropped phones in cases. Our theories will always need actual empirical measurements to fill in the gaps in our understanding.
Given these limitations, my co-author and I have proposed a solution: a new human-computer interface (HCI) that allows humans and computers to work together to counter each other's weaknesses. Computers, for example, could prevent humans from falling prey to cognitive biases such as functional fixedness, design fixation, goal fixedness, assumption blindness, and analogy blindness. And humans could make up for the creative deficiencies of computers. In order for this to work, the interface needs to be both human- and computer-friendly.

We shouldn't concern ourselves, therefore, with whether computers will overtake humans; instead, we should focus on designing a system that allows humans and computers to collaborate easily so each partner can build upon the other's strengths and counter the other's weaknesses. Along the way, humans will rise to the challenge of working with a strong machine partner. In some ways, this is already happening. After losing the first three games to AlphaGo, and witnessing the computer system make novel moves that players hadn't seen before, Lee Sedol made a highly creative move of his own and ended up winning the game. We can expect to see more of this in the future. As we develop these systems and they evolve, humans and computers will continue to challenge each other, and, as a result, both will become more innovative in the process. Further, designing the proper interface for humans and computers to collaborate on the same problem will unleash a level of innovativeness that neither partner can achieve alone.
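The enumeration claim above, that even the fastest supercomputer running since the 1950s could not explore all the features of an object, can be illustrated with back-of-the-envelope arithmetic. The figures below (an exaflop-class machine, roughly 70 years of runtime, 100 or 300 candidate features) are assumptions chosen for the illustration, not numbers from the published proof.

```python
# Rough illustration of why exhaustive feature exploration is infeasible.
ops_per_second = 1e18                        # assumed exaflop-class supercomputer
seconds_since_1950 = 70 * 365.25 * 24 * 3600
budget = ops_per_second * seconds_since_1950  # ~2.2e27 operations ever executed

for n_features in (100, 300):                # assumed candidate feature counts
    subsets = 2 ** n_features                # every combination of features to examine
    print(f"{n_features} features: {float(subsets):.1e} subsets, "
          f"{subsets / budget:.0e} times the machine's entire budget")
```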

April 20, 15:00

Thinking Through How Automation Will Affect Your Workforce

Today, executives have to cut through a lot of hype around automation. Leaders need a clear-eyed way to think about how these technologies will specifically affect their organizations. The right question isn't which jobs are going to be replaced, but rather, what work will be redefined, and how? Based on our work with a number of organizations grappling with these issues, we've found that the following four-step approach can help.

1. Start with the work, not the "job" or the technology. Much work will continue to exist as traditional "jobs" in organizations, but automation makes traditional jobs more fluid, and an increasing amount of work will occur outside the traditional boundaries of a "job." Optimally integrating humans and automation requires greater ability to deconstruct work into discrete elements — that is, seeing the tasks of a job as independent and fungible components. Deconstructing and then reconfiguring the components within jobs reveals human-automation combinations that are more efficient, effective, and impactful. AI and robotics increasingly take on the routine aspects of both blue- and white-collar jobs, leaving the non-routine to humans. That challenges the very essence of what organizations retain as human work. The reconfiguration of these non-routine activities will yield new and different types of jobs.

2. Understand the different work automation opportunities. AI can support three types of automation: robotic process automation (RPA), cognitive automation, and social robotics. RPA automates high-volume, low-complexity, routine administrative "white collar" tasks — the logical successor to outsourcing many administrative processes, further reducing costs and increasing accuracy. Optimizing RPA can only be done when the work is deconstructed. For example, RPA will seldom replace the entire "job" of a call center representative. Certain tasks, such as talking a client through their frustration with a faulty product or mishandled order, will, for now, remain human tasks. Others, such as requesting customer identification information and tracking the status of a delivery, are optimally done with RPA (a sketch of such a task-level deconstruction appears after this list). Cognitive automation takes on more complex tasks by applying things like pattern recognition or language understanding. For example, the Amazon Go retail store in Seattle has no cashiers or checkout lanes. Customers pick up their items and go, as sensors and algorithms automatically charge their Amazon account. Automation has replaced the work elements of scanning purchases and processing payment. Yet other elements of the "job" of store associate are still done by humans, including advising in-store customers about product features. Social robotics involves robots moving autonomously and interacting or collaborating with humans through the combination of sensors, AI, and mechanical robots. A good example is "driverless" vehicles, where robotics and algorithms interact with other human drivers to navigate through traffic. Deconstructing the "job" reveals that the human still plays an important role. While the human "co-pilot" no longer does the work of routine navigation and piloting, they still do things like observing the driverless operation and stepping in to assist with unusual or dangerous situations.
Indeed, it is often overlooked that the human co-pilot is actually "training" the AI-driven social robotics, because every time the human makes a correction, the situation and the results are "learned" by the AI system.

3. Manage the decoupling of work from the organization. The future global work ecosystem will offer alternative work arrangements including each of the three automation solutions, along with human work sources such as talent platforms, contingent labor, and traditional employment. The human work that is created or remains after automation will not fit easily into traditional jobs, nor will it always be optimally sourced through employment. Work will need to be freed from "jobs within organizations," and instead be measured and executed as more deconstructed units, engaged through many sources. Today's supply chains track the components of products at both an atomized and aggregate level. Similarly, the new work ecosystem will develop a common language of work, enabling organizations not only to forecast and meet work demands from various sources, but to devise new reconfigurations of work elements that are best sourced in alternative ways.

4. Re-envision the organization. The combination of automation, work deconstruction, and reconfiguration will often redefine the meaning of "organization" and "leadership." The "organization" must be reconsidered as a hub and capital source for an ecosystem of work providers. Those "providers" include AI and automation, but also include "human" sources such as employees, contractors, freelancers, volunteers, and partners. The optimal combination of these providers seldom appears if you frame the question as, "In which jobs will AI replace humans?" Only when you look within those jobs, as described above, will you discover the human-automation combinations that redefine work and how it should be organized.

AI will significantly disrupt and potentially empower the global workforce. It won't happen all at once or in every job, but it will happen, and leaders will need an automation strategy that realizes its benefits, avoids needless costs, and rests on a more nuanced understanding of work.
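One way to picture the deconstruction described in steps 1 and 2 is as a simple data structure that maps each task in a "job" to a work source. The call-center tasks come from the article's own example; the representation itself is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class WorkSource(Enum):
    HUMAN = "human"
    RPA = "robotic process automation"
    COGNITIVE = "cognitive automation"
    SOCIAL_ROBOTICS = "social robotics"

@dataclass
class Task:
    name: str
    routine: bool
    source: WorkSource

# The "job" of a call-center representative, deconstructed into tasks
# (examples taken from the article; the split is illustrative).
call_center_rep = [
    Task("request customer identification", routine=True, source=WorkSource.RPA),
    Task("track delivery status", routine=True, source=WorkSource.RPA),
    Task("talk a client through a faulty product", routine=False, source=WorkSource.HUMAN),
]

automated = [t.name for t in call_center_rep if t.source is not WorkSource.HUMAN]
print("candidates for automation:", automated)
```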

Editors' choice
April 19, 16:00

Creating Simple Rules for Complex Decisions

Machines can now beat humans at complex tasks that seem tailored to the strengths of the human mind, including poker, the game of Go, and visual recognition. Yet for many high-stakes decisions that are natural candidates for automated reasoning, like doctors diagnosing patients and judges setting bail, experts often favor experience and intuition over data and statistics. This reluctance to adopt formal statistical methods makes sense: Machine learning systems are difficult to design, apply, and understand. But eschewing advances in artificial intelligence can be costly.

Recognizing the real-world constraints that managers and engineers face, we developed a simple three-step procedure for creating rubrics that improve yes-or-no decisions. These rubrics can help judges decide whom to detain, tax auditors whom to scrutinize, and hiring managers whom to interview. Our approach offers practitioners the performance of state-of-the-art machine learning while stripping away needless complexity.

To see these rules in action, consider pretrial release decisions. When defendants first appear in court, judges must assess their likelihood of skipping subsequent court dates. Those deemed low-risk are released back into the community, while high-risk defendants are detained in jail; these decisions are thus consequential both for defendants and for the general public. To aid judges in making these decisions, we used our procedure to create a simple risk chart. Each defendant's flight risk is computed by summing scores corresponding to their age and number of court dates missed. A risk threshold is then applied to convert the score to a binary release-or-detain recommendation. For example, with a risk threshold of 10, a 35-year-old defendant who has missed one court date would score an eight (two for age plus six for missing one prior court date), and would be recommended for release.

Despite its simplicity, this rule significantly outperforms expert human decision makers. We analyzed over 100,000 judicial pretrial release decisions in one of the largest cities in the country. Following our rule would allow judges in this jurisdiction to detain half as many defendants without appreciably increasing the number who fail to appear at court. How is that possible? Unaided judicial decisions are only weakly related to a defendant's objective level of flight risk. Further, judges apply idiosyncratic standards, with some releasing 90% of defendants and others releasing only 50%. As a result, many high-risk defendants are released and many low-risk defendants are detained. Following our rubric would ensure defendants are treated equally, with only the highest-risk defendants detained, simultaneously improving the efficiency and equity of decisions.

Decision rules of this sort are fast, in that decisions can be made quickly, without a computer; frugal, in that they require only limited information to reach a decision; and clear, in that they expose the grounds on which decisions are made. Rules satisfying these criteria have many benefits, both in the judicial context and beyond. For instance, easily memorized rules are likely to be adopted and used consistently. In medicine, frugal rules may reduce the number of tests required, which can save time, money, and, in the case of triage situations, lives.
And the clarity of simple rules engenders trust by revealing how decisions are made and indicating where they can be improved. Clarity can even become a legal requirement when society demands fairness and transparency.

Simple rules certainly have their advantages, but one might reasonably wonder whether favoring simplicity means sacrificing performance. In many cases the answer, surprisingly, is no. We compared our simple rules to complex machine learning algorithms. In the case of judicial decisions, the risk chart described above performed nearly identically to the best statistical risk assessment techniques. Replicating our analysis in 22 varied domains, we found that this phenomenon holds: Simple, transparent decision rules often perform on par with complex, opaque machine learning methods.

To create these simple rules, we used a three-step strategy that we call select-regress-round. Here's how it works.

Select a few leading indicators of the outcome in question — for example, using a defendant's age and number of court dates missed to assess flight risk. We find that having two to five indicators works well. The two factors we used for pretrial decisions are well-known indicators of flight risk; without such domain knowledge, one can create the list of factors using standard statistical methods (e.g., stepwise feature selection).

Regress: using historical data, regress the outcome (skipping court) on the selected predictors (age and number of court dates missed). This step can be carried out in one line of code with modern statistical software.

Round: the output of the above step is a model that assigns complicated numerical weights to each factor. Such weights are overly precise for many decision-making applications, so we round the weights to produce integer scores.

Our select-regress-round strategy yields decision rules that are simple. Equally important, the method for constructing the rules is itself simple. The three-step recipe can be followed by an analyst with limited training in statistics, using freely available software. Statistical decision rules work best when objectives are clearly defined and when data is available on past outcomes and their leading indicators. When these criteria are satisfied, statistically informed decisions often outperform the experience and intuition of experts. Simple rules, and our simple strategy for creating them, bring the power of machine learning to the masses.
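The select-regress-round recipe is concrete enough to sketch in code: pick a handful of indicators, fit a regression on historical outcomes, then rescale and round the coefficients into an integer rubric. The sketch below uses scikit-learn with synthetic data; the data-generating process, scaling choice, and threshold are illustrative assumptions, not the values behind the authors' published risk chart.

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_regress_round(X, y, scale=10):
    """Select: X holds two to five hand-picked indicators.
    Regress: fit a logistic regression on historical outcomes.
    Round: rescale the weights and round them to integer scores."""
    model = LogisticRegression().fit(X, y)
    w = model.coef_[0]
    return np.round(scale * w / np.abs(w).max()).astype(int)

# Synthetic stand-in for historical pretrial data, with the article's two
# indicators: age (in decades) and number of prior court dates missed.
rng = np.random.default_rng(0)
n = 5000
age_decades = rng.uniform(1.8, 7.0, size=n)
missed = rng.poisson(0.6, size=n)
logit = -1.5 - 0.4 * (age_decades - 4.0) + 1.1 * missed   # assumed data-generating process
skipped = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age_decades, missed])
scores = select_regress_round(X, skipped)
print("integer scores for [age in decades, dates missed]:", scores)

def flight_risk_score(age_years, dates_missed):
    """Apply the rounded rubric by summing the integer scores."""
    return scores[0] * (age_years / 10.0) + scores[1] * dates_missed

# A threshold converts the score into a release-or-detain recommendation;
# the value 10 here is an illustrative choice, not the article's calibrated cutoff.
print("detain?", flight_risk_score(35, 1) >= 10)
```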

Editors' choice
April 19, 09:00

What do the five directions of future technological development mean for banks?

We have already discussed Accenture's technology forecast for 2017. Using examples of projects that have already been implemented, let's look at what the development trends identified in that study will mean for banks.

April 18, 15:00

The First Wave of Corporate AI Is Doomed to Fail

Artificial intelligence is a hot topic right now. Driven by a fear of losing out, companies in many industries have announced AI-focused initiatives. Unfortunately, most of these efforts will fail. They will fail not because AI is all hype, but because companies are approaching AI-driven innovation incorrectly. And this isn't the first time companies have made this kind of mistake.

Back in the late 1990s, the internet was the big trend. Most companies started online divisions. But there were very few early wins. Once the dot-com bust happened, these companies shut down or significantly downscaled their online efforts. A few years later they were caught napping when online upstarts disrupted industries such as music, travel, news, and video, while transforming scores of others. In the mid-2000s, the buzz was about cloud computing. Once again, several companies decided to test the waters. There were several early issues, ranging from regulatory compliance to security. Many organizations backed off from moving their data and applications to the cloud. The ones that persisted are incredibly well-positioned today, having transformed their business processes and enabled a level of agility that competitors cannot easily mimic. The vast majority are still playing catch-up.

We believe that a similar story of early failures leading to irrational retreats will occur with AI. Already, evidence suggests that early AI pilots are unlikely to produce the dramatic results that technology enthusiasts predict. For example, early efforts of companies developing chatbots for Facebook's Messenger platform saw 70% failure rates in handling user requests. Yet a reversal on these initiatives among large companies would be a mistake. The potential of AI to transform industries truly is enormous. Recent research from McKinsey Global Institute found that 45% of work activities could potentially be automated by today's technologies, and 80% of that is enabled by machine learning. The report also highlighted that companies across many sectors, such as manufacturing and health care, have captured less than 30% of the potential from their data and analytics investments. Early failures are often used to slow or completely end these investments. AI is a paradigm shift for organizations that have yet to fully embrace and see results from even basic analytics. So creating organizational learning in the new platform is far more important than seeing a big impact in the short run. But how does a manager justify continuing to invest in AI if the first few initiatives don't produce results?

We suggest taking a portfolio approach to AI projects: a mix of projects that might generate quick wins and long-term projects focused on transforming end-to-end workflow. For quick wins, one might focus on changing internal employee touchpoints, using recent advances in speech, vision, and language understanding. Examples of these projects might be a voice interface to help pharmacists look up substitute drugs, or a tool to schedule internal meetings. These are areas in which recently available, off-the-shelf AI tools, such as Google's Cloud Speech API and Nuance's speech recognition API, can be used, and they don't require massive investment in training and hiring.
(Disclosure: One of us is an executive at Alphabet Inc., the parent company of Google.) They will not be transformational, but they will help build consensus on the potential of AI. Such projects also help organizations gain experience with large-scale data gathering, processing, and labeling, skills that companies must have before embarking on more-ambitious AI projects.

For long-term projects, one might go beyond point optimization to rethinking end-to-end processes, which is the area in which companies are likely to see the greatest impact. For example, an insurer could take a business process such as claims processing and automate it entirely, using speech and vision understanding. Allstate car insurance already allows users to take photos of auto damage and settle their claims on a mobile app. Technology that's been trained on photos from past claims can accurately estimate the extent of the damage and automate the whole process. As companies such as Google have learned, building such high-value workflow automation requires not just off-the-shelf technology but also organizational skills in training machine learning algorithms.

As Google pursued its goal of transitioning into an AI-first company, it followed a similar portfolio-based approach. The initial focus was on incorporating machine learning into a few subcomponents of a system (e.g., spam detection in Gmail), but now the company is using machine learning to replace entire sets of systems. Further, to increase organizational learning, the company is dispersing machine learning experts across product groups and training thousands of software engineers, across all Google products, in basic machine learning.

This all leads to the question of how best to recruit the resources for these efforts. The good news is that emerging marketplaces for AI algorithms and datasets, such as Algorithmia and the Google-owned Kaggle, coupled with scalable, cloud-based infrastructure that is custom-built for artificial intelligence, are lowering barriers. Algorithms, data, and IT infrastructure for large-scale machine learning are becoming accessible to even small and medium-size businesses. Further, the cost of artificial intelligence talent is coming down as the supply of trained professionals increases. Just as the cost of building a mobile app went from $200,000–$300,000 in 2010 to less than $10,000 today, with better development tools, standardization around a few platforms (Android and iOS), and an increased supply of mobile developers, similar price deflation in the cost of building AI-powered systems is coming. The implication is that there is no need for firms to frontload their hiring. Hiring slowly yet consistently over time and making use of marketplaces for machine learning software and infrastructure can help keep costs manageable.

There is little doubt that an AI frenzy is starting to bubble up. We believe AI will indeed transform industries. But the companies that will succeed with AI are the ones that focus on creating organizational learning and changing organizational DNA. And the ones that embrace a portfolio approach rather than concentrating their efforts on that one big win will be best positioned to harness the transformative power of artificial intelligence.
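A quick-win project of the kind described above, a voice interface for pharmacists built on an off-the-shelf service, can be prototyped with very little code. The sketch below uses the `google-cloud-speech` Python client for Google's Cloud Speech API; the audio file, configuration values, and the downstream drug-lookup function are assumptions for illustration.

```python
# pip install google-cloud-speech  (requires Google Cloud credentials)
from google.cloud import speech

def transcribe_pharmacist_query(wav_path: str) -> str:
    """Send a short recorded question to the Cloud Speech-to-Text API and
    return the top transcript, which could then feed a drug-lookup service."""
    client = speech.SpeechClient()
    with open(wav_path, "rb") as fh:
        audio = speech.RecognitionAudio(content=fh.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

# Hypothetical usage: the transcript drives an internal substitute-drug lookup.
# query = transcribe_pharmacist_query("pharmacist_question.wav")
# print(lookup_substitutes(query))   # lookup_substitutes is an assumed internal function
```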

April 17, 19:01

Carmakers brace for sharp sales slowdown

Global carmakers converge on China for the Shanghai auto show this week, with the industry bracing for a sharp sales slowdown and a potential price war as competition stiffens in the world's biggest car market.

April 17, 15:00

To Get Consumers to Trust AI, Show Them Its Benefits

Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there's still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. We can find a similar number of skeptics of AI-enabled medical diagnosis systems, such as IBM's Watson. The public's lack of trust in AI applications may cause us to collectively neglect the possible advantages we could gain from them. In order to understand trust in the relationship between humans and automation, we have to explore trust in two dimensions: trust in the technology and trust in the innovating firm.

In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it can be created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome involved. Perceived risk in the context of AI stems from giving up control to a machine. Trust in automation can only evolve from predictability, dependability, and faith. Three factors will be crucial to gaining this trust: 1) performance — that is, the application performs as expected; 2) process — that is, we have an understanding of the underlying logic of the technology; and 3) purpose — that is, we have faith in the design's intentions. Additionally, trust in the company designing the AI, and the way the firm communicates with customers, will influence whether the technology is adopted by customers. Too many high-tech companies wrongly assume that the quality of the technology alone will influence people to use it.

In order to understand how firms have systematically enhanced trust in applied AI, my colleagues Monika Hengstler and Selina Duelli and I conducted nine case studies in the transportation and medical device industries. By comparing BMW's semi-autonomous and fully autonomous cars, Daimler's Future Truck project, ZF Friedrichshafen's driving assistance system, as well as Deutsche Bahn's semi-autonomous and fully autonomous trains and VAG Nürnberg's fully automated underground train, we gained a deeper understanding of how those companies foster trust in their AI applications. We also analyzed four cases in the medical technology industry: IBM's Watson as an AI-empowered diagnosis system, HP's data analytics system for automated fraud detection in the healthcare sector, AiCure's medical adherence app that reminds patients to take their medication, and Fraunhofer IPA's Care-O-bot 3, a research platform for upcoming commercial service robot solutions. Our semi-structured interviews, follow-ups, and archival data analysis were guided by a theoretical discussion of how trust in the technology and in the innovating firm and its communication is facilitated. Based on this cross-case analysis, we found that operational safety and data security are decisive factors in getting people to trust technology.
Since AI-empowered technology is based on the delegation of control, it will not be trusted if it is flawed. And since negative events are more visible than positive events, operational safety alone is not sufficient for building trust. Additionally, cognitive compatibility, trialability, and usability are needed.

Cognitive compatibility describes what people feel or think about an innovation as it pertains to their values. Users tend to trust automation if the algorithms are understandable and guide them toward achieving their goals. This understandability of the algorithms and motives in AI applications directly affects the perceived predictability of the system, which, in turn, is one of the foundations of trust.

Trialability points to the fact that people who were able to visualize the concrete benefits of a new technology via a trial run reduced their perceived risk and therefore their resistance to the technology.

Usability is influenced by both the intuitiveness of the technology and the perceived ease of use. An intuitive interface can reduce initial resistance and make the technology more accessible, particularly for less tech-savvy people. Usability testing with the target user group is an important first step toward creating this ease of use. But even more important is the balance between control and autonomy in the technology. For efficient collaboration between humans and machines, the appropriate level of automation must be carefully defined. This is even more important in intelligent applications that are designed to change human behaviors (such as medical devices that incentivize humans to take their medications on time). The interaction should not make people feel like they're being monitored, but rather assisted. Appropriate incentives are important to keep people engaged with an application, ultimately motivating them to use it as intended.

Our cases showed that technologies with high visibility — e.g., autonomous cars in the transportation industry, or AiCure and Care-O-bot in the healthcare industry — require more intensive efforts to foster trust in all three trust dimensions. Our results also showed that stakeholder alignment, transparency about the development process, and gradual introduction of the technology are crucial strategies for fostering trust. Introducing innovations in a stepwise fashion can lead to more gradual social learning, which in turn builds trust. Accordingly, the established firms in our sample tended to pursue a more gradual introduction of their AI applications to allow for social learning, while younger companies such as AiCure tended to choose a more revolutionary introduction approach in order to position themselves as technology leaders. The latter approach carries a high risk of rejection and the potential to cause a scandal if the underlying algorithms turn out to be flawed.

If you're trying to get consumers to trust a new AI-enabled application, communication should be proactive and open in the early stages of introducing the public to the technology, as it will influence the company's perceived credibility and trustworthiness, which in turn influences attitude formation. In the cases we studied, when the benefits of an AI application were communicated effectively, users perceived less risk, which resulted in greater trust and a higher likelihood of adopting the new technology.

April 14, 14:00

How Companies Are Already Using AI

Every few months, it seems, another study warns that a big slice of the workforce is about to lose its jobs to artificial intelligence. Four years ago, an Oxford University study predicted that 47% of jobs could be automated by 2033. Even the near-term outlook has been quite negative: a 2016 report by the Organization for Economic Cooperation and Development (OECD) said 9% of jobs in the 21 countries that make up its membership could be automated. And in January 2017, McKinsey's research arm estimated AI-driven job losses at 5%. My own firm recently released a survey of 835 large companies (with an average revenue of $20 billion) that predicts a net job loss of between 4% and 7% in key business functions by the year 2020 due to AI.

Yet our research also found that, in the shorter term, these fears may be overblown. The companies we surveyed, in 13 manufacturing and service industries in North America, Europe, Asia-Pacific, and Latin America, are using AI much more frequently in computer-to-computer activities and much less often to automate human activities. "Machine-to-machine" transactions are the low-hanging fruit of AI, not people-displacement.

For example, our survey asked managers of 13 functions, from sales and marketing to procurement and finance, to indicate whether their departments were using AI in 63 core areas. AI was used most frequently in detecting and fending off computer security intrusions in the IT department, a task mentioned by 44% of our respondents. Yet even in this case, we doubt AI is automating the jobs of IT security people out of existence. In fact, we find it is helping these often severely overloaded IT professionals deal with geometrically increasing hacking attempts. AI is making IT security professionals more valuable to their employers, not less.

Although we saw examples of companies using AI in computer-to-computer transactions, such as recommendation engines that suggest what a customer should buy next, online securities trading, and media buying, IT was one of the largest adopters of AI. And it wasn't just to detect a hacker's moves in the data center. IT was using AI to resolve employees' tech support problems, automate the work of putting new systems or enhancements into production, and make sure employees used technology from approved vendors. Between 34% and 44% of the global companies surveyed are using AI in their IT departments in these four ways, monitoring huge volumes of machine-to-machine activity.

In stark contrast, very few of the companies we surveyed were using AI to eliminate jobs altogether. For example, only 2% are using artificial intelligence to monitor internal legal compliance, and only 3% to detect procurement fraud (e.g., bribes and kickbacks). What about the automation of the production line? Whether assembling automobiles or insurance policies, only 7% of manufacturing and service companies are using AI to automate production activities. Similarly, only 8% are using AI to allocate budgets across the company, and just 6% are using AI in pricing.

Where to Find the Low-Hanging Fruit

So where should your company look to find such low-hanging fruit: applications of AI that won't kill jobs yet could bestow big benefits?
From our survey and best-practice research on companies that have already generated significant returns on their AI investments, we identified three patterns that separate the best from the rest when it comes to AI. All three are about using AI first to improve computer-to-computer (or machine-to-machine) activities before using it to eliminate jobs.

Put AI to work on activities that have an immediate impact on revenue and cost. When Joseph Sirosh joined Amazon.com in 2004, he began seeing the value of AI in reducing fraud, bad debt, and the number of customers who didn't get their goods and suppliers who didn't get their money. By the time he left Amazon in 2013, his group had grown from 35 to more than 1,000 people who used machine learning to make Amazon more operationally efficient and effective. Over the same period, the company saw a tenfold increase in revenue. After joining Microsoft Corporation in 2013 as corporate vice president of the Data Group, Sirosh led the charge in using AI in the company's database, big data, and machine learning offerings. AI wasn't new at Microsoft: the company had brought in a data scientist in 2008 to develop machine learning tools that would improve its search engine, Bing, in a market dominated by Google. Since then, AI has helped Bing more than double its share of the search engine market (to 20%); as of 2015, Bing generated more than $1 billion in revenue every quarter. (That was the year Bing became a profitable business for Microsoft.) Microsoft's use of AI now extends far beyond that, including to its Azure cloud computing service, which puts the company's AI tools in the hands of Azure customers. (Disclosure: Microsoft is a TCS client.)

Look for opportunities in which AI could help you produce more products with the same number of people you have today. The AI experience of the 170-year-old news service Associated Press is a great case in point. In 2013, AP found a seemingly insatiable demand for quarterly earnings stories, but its staff of 65 business reporters could write only 6% of the stories possible, given America's 5,300 publicly held companies. The earnings news of many small companies thus went unreported on AP's wire services (other than the automatically published tabular data). So that year, AP began working with an AI firm to train software to automatically write short earnings news stories. By 2015, AP's AI system was writing 3,700 quarterly earnings stories, 12 times the number written by its business reporters. This is a machine-to-machine application of AI: the AI software is one machine; the other is the digital data feed that AP gets from a financial information provider (Zacks Investment Research). No AP business journalist lost a job. In fact, AI freed up the staff to write more in-depth stories on business trends. (A minimal sketch of this kind of templated story generation appears at the end of this piece.)

Start in the back office, not the front office. You might think companies will get the greatest returns on AI in business functions that touch customers every day (like marketing, sales, and service) or by embedding it in the products they sell to customers (e.g., the self-driving car, the self-cleaning barbecue grill, the self-replenishing refrigerator, etc.). Our research says otherwise.
We asked survey participants to estimate their returns on AI in revenue and cost improvements, and then we compared the answers of the companies with the greatest improvements (call them "AI leaders") to the answers of the companies with the smallest improvements ("AI followers"). Some 51% of the AI leaders predicted that by 2020 AI would have its biggest internal impact on their back-office functions of IT and finance/accounting; only 34% of AI followers said the same thing. Conversely, 43% of AI followers said AI's impact would be greatest in the front-office areas of marketing, sales, and service, yet only 26% of the AI leaders felt it would be there. We believe the leaders have the right idea: focus your AI initiatives in the back office, particularly where there are lots of computer-to-computer interactions in IT and finance/accounting.

Computers today are far better at managing other computers and, in general, inanimate objects or digital information than they are at managing human interactions. When companies use AI in this sphere, they don't have to eliminate jobs. Yet the job-destroying applications of AI are what command the headlines: driverless cars and trucks, robotic restaurant order-takers and food preparers, and more. Make no mistake: automation and artificial intelligence will eliminate some jobs. Chatbots for customer service have proliferated; robots on the factory floor are real. But we believe companies would be wise to use AI first where their computers already interact. There's plenty of low-hanging fruit there to keep them busy for years.
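To make the AP example above concrete, here is a minimal, hypothetical sketch of how a short earnings brief can be generated from one structured record of a financial data feed. The EarningsRecord fields, the template wording, and the sample numbers are illustrative assumptions; this is not AP's or Zacks Investment Research's actual pipeline.

```python
# Hypothetical sketch: generating a short earnings brief from one structured
# record of a financial data feed. Field names and the template wording are
# illustrative assumptions, not AP's or Zacks's actual system.

from dataclasses import dataclass


@dataclass
class EarningsRecord:
    company: str
    ticker: str
    quarter: str             # e.g. "second-quarter 2016"
    eps_actual: float        # reported earnings per share, in dollars
    eps_estimate: float      # consensus analyst estimate, in dollars
    revenue_billions: float  # reported revenue, in billions of dollars


def earnings_brief(r: EarningsRecord) -> str:
    """Render a two-sentence brief from a single data-feed record."""
    surprise_cents = round((r.eps_actual - r.eps_estimate) * 100)
    if surprise_cents > 0:
        verdict = f"beat the consensus estimate by {surprise_cents} cents"
    elif surprise_cents < 0:
        verdict = f"missed the consensus estimate by {-surprise_cents} cents"
    else:
        verdict = "matched the consensus estimate"
    return (
        f"{r.company} ({r.ticker}) reported {r.quarter} earnings of "
        f"${r.eps_actual:.2f} per share, which {verdict}. Revenue came in "
        f"at ${r.revenue_billions:.1f} billion."
    )


if __name__ == "__main__":
    record = EarningsRecord("Example Corp", "EXMP", "second-quarter 2016",
                            eps_actual=0.52, eps_estimate=0.47,
                            revenue_billions=8.4)
    print(earnings_brief(record))
```

The point of the sketch is the machine-to-machine shape of the work: a structured record goes in, a publishable brief comes out, and no reporter sits in the loop for the routine cases.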

April 13, 15:00

AI Adds a New Layer to Cyber Risk

Cognitive computing and artificial intelligence (AI) are spawning what many are calling a new type of industrial revolution. While the two terms often refer to the same underlying processes, there is a slight nuance to each. Cognitive computing uses a suite of technologies designed to augment the cognitive capabilities of the human mind; a cognitive system can perceive and infer, reason and learn. We're defining AI here as a broad term that loosely refers to computers that can perform tasks that once required human intelligence.

Because these systems can be trained to analyze and understand natural language, mimic human reasoning processes, and make decisions, businesses are increasingly deploying them to automate routine activities. From self-driving cars to drones to automated business operations, this technology has the potential to enhance productivity, focus human talent on critical issues, accelerate innovation, and lower operating costs.

Yet, like any technology that is not properly managed and protected, cognitive systems that rely on humanoid robots and avatars, and less human labor, can also pose immense cybersecurity vulnerabilities for businesses, compromising their operations. The criminal underground has been leveraging this capability for years through "botnets": tiny pieces of code distributed across thousands of computers and programmed to execute tasks that mimic the actions of tens or even hundreds of thousands of users, resulting in mass cyberattacks, spamming of email and texts, and denial-of-service attacks that make major websites unavailable for long periods of time.

In a digital world with greater reliance on business data analytics and electronic consumer interactions, the C-suite cannot afford to ignore these existing security risks. In addition, there are unique and new cyber risks associated with cognitive and AI technology. Businesses must be thoughtful about adopting new information technologies, employing multiple layers of cyber defense, and planning security to reduce the growing threat. As with any innovative new technology, there are positive and negative implications; businesses must recognize that a technology powerful enough to benefit them is equally capable of hurting them.

First, there is no guarantee of reliability with cognitive technology. It is only as good as the information fed into the system and the training and context that a human expert provides. In an ideal state, systems are designed to simulate and scale the reasoning, judgment, and decision-making capabilities of the most competent and expertly trained human minds. But bad human actors, say a disgruntled employee or rogue outsiders, could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by "teaching" the computer to process data inappropriately.

Second, cognitive and artificial intelligence systems are trained to mimic the analytical processes of the human brain, not always through clear, step-by-step programming instructions like a traditional system, but through example, repetition, observation, and inference.
If the system is sabotaged or purposely fed inaccurate information, it could infer an incorrect correlation as "correct" or "learn" a bad behavior. And because most cognitive systems are designed to operate with a degree of freedom, as humans do, they often use non-expiring, "hard-coded" passwords. A malicious hacker can use the same login credentials as the bot to gain access to far more data than any single individual is allowed. Security monitoring systems are sometimes configured to ignore "bot" or "machine access" logs to reduce the large volume of systemic access, but this can allow a malicious intruder, masquerading as a bot, to gain access to systems for long periods of time and go largely undetected.

In some cases, attempts to leverage new technology can have unintended consequences, and an entire organization can become a victim. In a now-classic example, Microsoft's Twitter bot, Tay, which was designed to learn how to communicate naturally with young people on social media, was compromised shortly after going live when internet trolls figured out the vulnerabilities of its learning algorithms and began feeding it racist, sexist, and homophobic content. The result was that Tay began to spew hateful and inappropriate answers and commentary on social media to millions of followers.

Finally, contrary to popular thinking, cognitive systems are not protected from hacks just because a process is automated. Chatbots are increasingly commonplace in every type of setting, including enterprise and customer call centers. By collecting personal information about users and responding to their inquiries, some bots are designed to keep learning over time how to do their jobs better. This plays a critical role in ensuring accuracy, particularly in regulated industries like healthcare and finance that hold a high volume of confidential membership and customer information. But like any technology, automated chatbots can also be used by malicious hackers to scale up fraudulent transactions, mislead people, steal personally identifiable information, and penetrate systems. We have already seen evidence of advanced AI tools being used to penetrate websites and steal compromising and embarrassing information on individuals, with high-profile examples such as Ashley Madison, Yahoo, and the DNC. As bad actors continue to develop advanced AI for malicious purposes, organizations will need to deploy equally advanced AI to prevent, detect, and counter these attacks.

But, risks aside, there is tremendous upside for cybersecurity professionals in leveraging AI and cognitive techniques. Routine tasks such as analyzing large volumes of security event logs can be automated by using digital labor and machine learning to increase accuracy (a minimal sketch of this kind of automated log analysis appears at the end of this piece). As systems become more effective at identifying malicious and unauthorized access, cybersecurity systems can become "self-healing," updating controls and patching systems in real time as a direct result of learning and understanding how hackers exploit new approaches.

Cognitive and AI technologies are a certainty of our future. While they have the power to bring immense gains in productivity and quality of life, we must also be mindful of potential vulnerabilities on an equally large scale. With humans, a security breach can often be traced back to its source and sealed. With cognitive and AI breaches, the damage can become massive in seconds.
Balancing the demands between automation and information security should be about making cybersecurity integral — not an afterthought — to an organization’s information infrastructure.
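As a concrete illustration of the routine log-analysis automation mentioned above, here is a minimal sketch that flags anomalous account activity with an unsupervised model. It assumes scikit-learn is available and that security events have already been aggregated into per-account counts; the feature names, sample numbers, and contamination setting are illustrative assumptions, not a production intrusion-detection system.

```python
# Minimal sketch: flagging anomalous activity in aggregated security logs
# with an unsupervised model (IsolationForest). The features and numbers
# below are illustrative assumptions, not real telemetry.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one account (human or bot credential) over one hour:
# [login_attempts, failed_logins, distinct_hosts_accessed, mb_downloaded]
activity = np.array([
    [12,  0,   3,    40],   # typical interactive user
    [15,  1,   4,    55],
    [10,  0,   2,    35],
    [14,  2,   3,    60],
    [900, 0, 120,  9000],   # bot credential used far beyond its usual scope
])

# contamination is the assumed share of anomalous rows; tune it to your data.
model = IsolationForest(n_estimators=100, contamination=0.2, random_state=0)
labels = model.fit_predict(activity)  # -1 = anomaly, 1 = normal

for row, label in zip(activity, labels):
    status = "REVIEW" if label == -1 else "ok"
    print(f"{status:6s} {row.tolist()}")
```

An approach like this does not replace the security analyst; it shrinks the pile of events a human has to review, which is the "digital labor" point made above, and it treats bot credentials with the same scrutiny as human ones rather than filtering their logs out.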

April 12, 19:13

4 Stocks Set to Trump Earnings in the Technology Space

Technology has been one of the best-performing sectors in 2017.

October 25, 2014, 16:10

US Says an Oil Price of $57 per Barrel Suits It

Even if the price of oil falls to $57 per barrel, shale oil production will remain profitable, the Russian outlet Vesti reports, citing an analytical report by IHS, as relayed by the news resource OnPress.info. Just a year ago, American companies needed a price of $70 per barrel for shale oil production to be profitable, but with rapid advances in technology, even a price of $57 is now acceptable. For their part, US officials said that falling oil prices do not frighten the country's oil companies, which have drilled 18,000 wells this year alone. The CEO of Halliburton said that at this stage their goal is to lower the cost per barrel of oil. Moreover, according to estimates by the American company Accenture, the efficiency gains still available in oil production could reduce extraction costs by 40%. As a reminder, the Russian Federation's main budget for 2015 is based on an oil price of $96 per barrel, while Lukoil, one of Russia's oil-producing giants, has budgeted for oil at $80-85 per barrel. http://onpress.info/v-ssha-zayavili-chto-ix-ustraivaet-cena-v-57-za-barrel-nefti0015866?_utl_t=fb