Daily Tech Digest - September 30, 2018

How to successfully implement an AI system

Companies should calculate the anticipated cost savings that a successful AI deployment would deliver, and use that figure as the starting point for investment so that the cost of errors or shortfalls against expectations is minimised if they occur. The savings estimate should be based on efficiency gains, as well as the increased productivity that can be harnessed in other areas of the business by freeing staff from administrative tasks. This ensures companies do not over-invest before seeing initial results; if changes prove necessary, they do not cannibalise potential ROI and can still switch to other viable use cases. Before advising companies on which solution to invest in, it's important to first establish what they want to achieve. Digital colleagues can provide a far superior level of customer service; however, they require greater resources to set up. Most chatbots are not scalable: once deployed, they cannot be integrated into other business areas because they are designed to answer FAQs based on a static set of rules. Unlike digital colleagues, they cannot understand complex questions or perform several tasks at once.
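As a hypothetical sketch of that budgeting discipline (the function name, figures, payback horizon and risk discount below are all invented for illustration, not taken from the article):

```python
# Cap the initial AI spend at a discounted share of expected savings, so
# a shortfall against expectations does not cannibalise potential ROI.

def max_initial_investment(annual_efficiency_savings, annual_productivity_savings,
                           payback_years=2, risk_discount=0.5):
    """Anticipated savings set the ceiling; the risk discount leaves room
    to switch to an alternative use case if this one underdelivers."""
    expected_annual_savings = annual_efficiency_savings + annual_productivity_savings
    return expected_annual_savings * payback_years * risk_discount

# Illustrative figures: $200k/yr efficiency gains, $100k/yr freed-up staff time.
budget = max_initial_investment(200_000, 100_000)
print(budget)  # 300000.0
```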


How adidas is Creating a Digital Experience That's Premium, Connected, and Personalized

Take something like a product description. How do we really shape the product descriptions and offerings so that if you're interested in sports we will help you find exactly the product that you need for the sport that you're interested in? We will also educate you and bring you back at different points in time to help you find out what you need when you need it, or with an engagement program. Ultimately, like the membership program, it has to have something that's sticky, something you can keep coming back to; even more, you can participate in events and experiences. For us, a lot of it's really deepening those experiences but also exploring new technologies and new areas. Omnichannel was kind of the original wave which happened, and I said it was the freight train that came past us a couple of years ago. Now we're also looking at what those next freight trains are, whether it's technologies like blockchain or experimenting with picking up a new channel. For example, we're working extensively with Salesforce on automation, how we can automate consumer experiences.


What Deep Learning Can Offer to Businesses


With the capabilities of artificial intelligence, the way words are processed and interpreted can change dramatically. It turns out we can define the meaning of a word based on its position in the text, without needing a dictionary. ... One of the most recent successful applications of deep learning to image recognition came from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), when Alex Krizhevsky applied convolutional neural networks to organize images from ImageNet, a dataset containing 1.2 million pictures, into 1,000 different classes. In 2012, Krizhevsky’s network, AlexNet, achieved a top-5 test error rate of 15.3%, outperforming traditional computer vision solutions by more than 10 percentage points. Krizhevsky’s result changed the landscape of the data science and artificial intelligence field, for both research and business applications. In 2012, AlexNet was the only deep learning model at ILSVRC. Two years later, in 2014, there were no conventional computer vision solutions among the winners.
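For readers unfamiliar with the metric, here is a minimal sketch of the top-5 error rate AlexNet was scored on: an image counts as correct if the true class appears among the model's five highest-scoring classes. The scores below are toy values, not real model output.

```python
def top5_error(scores, true_labels):
    """Fraction of examples whose true class is NOT in the top 5 scores."""
    errors = 0
    for class_scores, truth in zip(scores, true_labels):
        top5 = sorted(range(len(class_scores)),
                      key=lambda c: class_scores[c], reverse=True)[:5]
        if truth not in top5:
            errors += 1
    return errors / len(true_labels)

# Toy example: 10 classes, 2 images. For the first image the true class (1)
# has the highest score; for the second the true class (0) is outside the top 5.
scores = [[0.1, 0.5, 0.05, 0.08, 0.07, 0.09, 0.04, 0.03, 0.02, 0.02],
          [0.02, 0.02, 0.03, 0.04, 0.09, 0.07, 0.08, 0.05, 0.5, 0.1]]
print(top5_error(scores, [1, 0]))  # 0.5
```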



Can Global Semantic Context Improve Neural Language Models?

Global co-occurrence count methods like LSM lead to word representations that can be considered genuine semantic embeddings, because they expose statistical information that captures semantic concepts conveyed within entire documents. In contrast, typical prediction-based solutions using neural networks only encapsulate semantic relationships to the extent that they manifest themselves within a local window centered around each word (which is all that’s used in the prediction). Thus, the embeddings that result from such solutions have inherently limited expressive power when it comes to global semantic information. Despite this limitation, researchers are increasingly adopting neural network-based embeddings. Continuous bag-of-words and skip-gram (linear) models, in particular, are popular because of their ability to convey word analogies of the type “king is to queen as man is to woman.”
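The analogy test mentioned above reduces to vector arithmetic over the embeddings. The tiny two-dimensional vectors below are hand-made for illustration; real embeddings would come from training a skip-gram or CBOW model on a corpus.

```python
import math

# Hand-made 2-D "embeddings": dimension 0 ≈ royalty, dimension 1 ≈ maleness.
emb = {
    "king":  [0.9, 0.8],
    "queen": [0.9, 0.1],
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.1],
}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(vec, exclude):
    """Word whose embedding is closest (by cosine) to vec, excluding the query words."""
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# "king is to queen as man is to woman": king - man + woman lands near queen.
analogy = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # queen
```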


Big Data and Machine Learning Won’t Save Us from Another Financial Crisis


Machine learning can be very effective at short-term prediction, using the data and markets we have encountered. But machine learning is not so good at inference, learning from data about underlying science and market mechanisms. Our understanding of markets is still incomplete. And big data itself may not help, as my Harvard colleague Xiao-Li Meng has recently shown in “Statistical Paradises and Paradoxes in Big Data.” Suppose we want to estimate a property of a large population, for example, the percentage of Trump voters in the U.S. in November 2016. How well we can do this depends on three quantities: the amount of data (the more the better); the variability of the property of interest (if everyone is a Trump voter, the problem is easy); and the quality of the data. Data quality depends on the correlation between the voting intention of a person and whether that person is included in the dataset. If Trump voters are less likely to be included, for example, that may bias the analysis.
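Meng's three quantities (data quantity, variability, data quality) can be illustrated with a toy simulation. The population size, the 46% rate, and the inclusion probabilities below are invented; the point is only that a huge sample whose inclusion correlates with the property can be more misleading than a small random one.

```python
import random

random.seed(0)
# Invented population: 46% hold the property we want to estimate.
population = [1] * 460_000 + [0] * 540_000

# Small simple random sample: unbiased, so it lands close to the truth.
srs = random.sample(population, 1000)
srs_est = sum(srs) / len(srs)

# "Big data": hundreds of thousands of records, but inclusion is correlated
# with the property (1-voters are recorded only half as often as 0-voters).
biased = [x for x in population if random.random() < (0.25 if x else 0.5)]
biased_est = sum(biased) / len(biased)

print(round(srs_est, 2), len(biased), round(biased_est, 2))
```

Despite being hundreds of times larger, the biased sample systematically underestimates the true rate, while the random sample of 1,000 does not.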


Spending on cognitive and AI systems to reach $77.6 billion in 2022

Banking and retail will be the two industries making the largest investments in cognitive/AI systems in 2018, with each industry expected to spend more than $4.0 billion this year. Banking will devote more than half of its spending to automated threat intelligence and prevention systems and fraud analysis and investigation, while retail will focus on automated customer service agents and expert shopping advisors & product recommendations. Beyond banking and retail, discrete manufacturing, healthcare providers, and process manufacturing will also make considerable investments in cognitive/AI systems this year. The industries expected to experience the fastest growth in cognitive/AI spending are personal and consumer services (44.5% CAGR) and federal/central government (43.5% CAGR). Retail will move into the top position by the end of the forecast with a five-year CAGR of 40.7%. On a geographic basis, the United States will deliver more than 60% of all spending on cognitive/AI systems throughout the forecast, led by the retail and banking industries.
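For readers unfamiliar with the metric, the growth figures quoted are compound annual growth rates (CAGRs). A small sketch, reusing the $4.0 billion base and 40.7% rate from the excerpt; the five-year projection itself is only illustrative arithmetic, not IDC's forecast:

```python
def cagr(begin, end, years):
    """Compound annual growth rate implied by growing from begin to end."""
    return (end / begin) ** (1 / years) - 1

def project(begin, rate, years):
    """Value after compounding at `rate` for `years` years."""
    return begin * (1 + rate) ** years

# $4.0B compounding at 40.7% per year for five years:
print(round(project(4.0, 0.407, 5), 2))   # 22.06 (billions of dollars)
# And recovering the rate from the endpoints:
print(round(cagr(4.0, 22.06, 5), 3))      # 0.407
```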


5 ways industrial AI is revolutionizing manufacturing

In manufacturing, ongoing maintenance of production line machinery and equipment represents a major expense, having a crucial impact on the bottom line of any asset-reliant production operation. Moreover, studies show that unplanned downtime costs manufacturers an estimated $50 billion annually, and that asset failure is the cause of 42 percent of this unplanned downtime. For this reason, predictive maintenance has become a must-have solution for manufacturers who have much to gain from being able to predict the next failure of a part, machine or system. Predictive maintenance uses advanced AI algorithms in the form of machine learning and artificial neural networks to formulate predictions regarding asset malfunction. This allows for drastic reductions in costly unplanned downtime, as well as for extending the Remaining Useful Life (RUL) of production machines and equipment. In cases where maintenance is unavoidable, technicians are briefed ahead of time on which components need inspection and which tools and methods to use, resulting in very focused repairs that are scheduled in advance.


Data Centers Must Move from Reducing Energy to Controlling Water

While it is a positive development that overall energy use in data centers is being reduced around the globe, a key component that has — for the most part — been glossed over is water usage. One example of this is the continued use of open-cell cooling towers. They take advantage of evaporative cooling to cool the air with water before it goes into the data center. And while this solution reduces energy, the water usage is very high. Raising the issue of water reduction is the first step in creating ways our industry can do something about it. As we experience the continued deluge of the “Internet of Things” — projected to exceed 20 billion devices by 2020 — we will only be able to ride this wave if we keep energy low and start reducing water usage. The first question becomes: how can cooling systems reject heat more efficiently? Let’s say heat is coming off the server at 100 degrees Fahrenheit. The idea is to efficiently capture that heat and bring it to the atmosphere as close to that temperature as possible — but it is all dependent on the absorption system.


AI and Automation to Have Far Greater Effect on Human Jobs by 2022

With the domination of automation in a business framework, the workforce can be extended to new productivity-enhancing roles. More than a quarter of surveyed businesses expect automation to lead to the creation of new roles in their enterprise. Apart from allotting contractors more task-specialized work, businesses plan to engage workers in a more flexible manner, utilizing remote staffing beyond physical offices and decentralizing operations. Above all, AI adoption has taken the lead in automation for reducing the time and investment in end-to-end processes. “Currently, AI is the most rapidly growing technology and will for sure create a new era of the modern world. It is the next revolution, relieving humans not only from physical work but also mental effort, and simplifying tasks extensively,” opined Kuppa. While human-performed tasks dominate today’s work environment, the frontier is expected to shift in the coming years.


Modeling Uncertainty With Reactive DDD

Reactive is a big thing these days, and I'll explain later why it's gaining a lot of traction. What I think is really interesting is that the way DDD was used or implemented, say back in 2003, is quite different from the way that we use DDD today. If you've read my red book, Implementing Domain-Driven Design, you're probably familiar with the fact that the bounded contexts that I model in the book are separate processes, with separate deployments. Whereas, in Evans' blue book, bounded contexts were separated logically, but sometimes deployed in the same deployment unit, perhaps in a web server or an application server. In our modern-day use of DDD, I'm seeing more people adopting DDD because it aligns with having separate deployments, such as in microservices. One thing to keep clear is that the essence of Domain-Driven Design is really still what it always was -- it's modeling a ubiquitous language in a bounded context. So, what is a bounded context? Basically, the idea behind a bounded context is to put a clear delineation between one model and another model.



Quote for the day:


"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks


Daily Tech Digest - September 29, 2018

Optimizing Multi-Cloud, Cross-DC Web Apps and Sites

Latency, payload, caching and rendering are the key measures when evaluating website performance. Each round trip is subject to the connection latency. The time from when a user requests a webpage to when the resources on that page finish downloading in the browser is directly related to the weight of the page and its resources. The larger the total content size, the more time it will take to download everything needed for a page to become functional for the user. Using caching and default caching headers can reduce latency, since less content is downloaded and fewer round trips may be needed to fetch resources — although sometimes round trips are still needed to validate that cached content is not stale. Browsers need to render the HTML page and resources served to them. Client-side work may cause poor rendering in the browser and a degraded user experience; for example, blocking calls (say, third-party ads) or improper rendering of page resources can delay page load time.
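The caching behaviour described above — download once, serve from cache while fresh, then revalidate instead of re-downloading — can be sketched with a toy client. The URL, body, `origin` callback, and 60-second freshness window are all invented stand-ins for real HTTP `Cache-Control: max-age` and `ETag` semantics.

```python
cache = {}  # url -> (body, etag, expires_at)

def fetch(url, origin, now):
    """origin stands in for the server: it returns (body, etag, max_age)."""
    if url in cache:
        body, etag, expires_at = cache[url]
        if now < expires_at:
            return body, "cache-hit"          # fresh: no round trip at all
        fresh_body, fresh_etag, max_age = origin(url)
        if fresh_etag == etag:                # cheap revalidation round trip
            cache[url] = (body, etag, now + max_age)
            return body, "revalidated"
    body, etag, max_age = origin(url)         # cold cache, or stale and changed
    cache[url] = (body, etag, now + max_age)
    return body, "full-download"

server = lambda url: ("<html>ok</html>", "v1", 60)
print(fetch("/page", server, now=0)[1])    # full-download
print(fetch("/page", server, now=30)[1])   # cache-hit
print(fetch("/page", server, now=90)[1])   # revalidated
```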


Lessons from the UK Government's Digital Transformation Journey

So many lessons! Some of my colleagues set out to document the higher level lessons. The result was an entire book -- Digital Transformation at Scale: Why the Strategy Is Delivery -- but there’s a huge amount more that couldn’t be included there. But top of the list is the importance of remaining focused on your purpose and your users’ needs. As technologists and agilists we can too easily be drawn into improving technology or simplifying processes without stepping back and asking why we have those things in the first place, or if the change we’re making is the right one. I’ve talked to a lot of teams in large organisations who have taken all the right steps in moving to agile but are still having trouble motivating their teams, and the missing piece is almost always being exposed directly to your users. Whether they’re end customers, or internal users, there’s nothing like seeing people use your products to motivate the team to make them better.


MissingLink.ai has launched this week to streamline and automate the entire deep learning life cycle for data scientists and engineers. “Work on MissingLink began in 2016, when my colleagues Shay Erlichmen [CTO], Rahav Lussato [lead developer], and I set out to solve a problem we experienced as software engineers. While working on deep learning projects at our previous company, we realized we were spending too much time managing the sheer volume of data we were collecting and analyzing, and too little time learning from it,” Yosi Taguri, CEO of MissingLink, wrote in a post. “We also realized we weren’t alone. As engineers, we knew there must be a more efficient solution, so we decided to build it. Around that time, we were joined by Joe Salomon [VP of product], and MissingLink was born.” The team decided to focus on machine learning and deep learning because of the potential to “impact our lives in profound ways.” Machine learning has already been used for detecting diseases, in autonomous vehicles and in public safety situations, according to the company.


Big data architecture: Navigating the complexity

First, there are the many different engines you might choose to run with your big data. You could choose Splunk to analyze log files, Hadoop for large-file batch processing, or Spark for data stream processing. Each of these specialized big data engines requires its own data universe, and ultimately, the data from these universes must come together—which is where the DBA is called in to do the stitching. But that's not all. Organizations are now mixing and matching on-premises and cloud-based big data processing and data storage. In many cases, they are using multiple cloud vendors as well. Once again, data and intelligence from these various repositories must be blended together at some point, as the business requires. "This is a system integration problem that vendors need to help their clients solve," said Anoop Dawar, SVP of product management and marketing for MapR, a converged data platform for big data. "You have to not only be able to provide a platform for all of the different big data processing engines and data stores that are out there, but you must also be able to rapidly provide access to new big data processing engines and data stores as they emerge."


Key Difference Between The Cloud And The Data Center

Whilst the purpose of both is the same – the storage, management, and maintenance of data – there is an evident architectural difference between the two. The first key difference is that a data center is land-based and in-house: a physical setup with IT professionals physically present and working together as a team. A cloud, on the other hand, is more like a virtual store with no physical presence of its own, dependent on the internet and accessible to the user only over the internet. There is also a notable difference in the security each offers. Understandably, cloud computing is less secure than a data center, as the latter is an in-house setup that is liable for protecting your security. Cloud computing, by contrast, is internet-based, which puts you at increased risk of data leaks and privacy invasion. Moreover, with cloud computing you are responsible for your own security, because the third-party operator of the cloud is not liable for your data.


5 Easy Ways To Determine If Your Company Needs Blockchain


The purest form of blockchain is in tracking and authenticating a digital asset (music, movies, digital wallets, education certifications, mortgage contracts, and so on) with digital transactions logged against it. Blockchains can also track and authenticate physical assets (gold, organic food, artwork, manufactured parts, and such), though those assets can require checkpoints considered “off-chain.” In such cases, you’ll need trusted sources in your business network to audit and authenticate the physical asset, which can be tricky. Consider a notorious example from the aerospace industry. Some argue that well before the Challenger space shuttle disaster in 1986, some parties knew that the spacecraft’s O-ring seals contained a flaw, but this design and manufacturing problem wasn’t addressed properly. What if an aerospace industry blockchain was tracking the origin, specification, materials, and testing of that part and any known problems? Only once the integrity of that part and required tests had been confirmed by many trusted participants could the part be used.


Axon Conference Panel: Why Should We Use Microservices?

For Schrijver, it’s all about scalability. In terms of teams it’s the ability to work with multiple teams on one product. In terms of operations it’s the ability to independently scale different parts of a system. He thinks that if you build a microservices system the right way you can have almost unlimited horizontal scalability. Buijze pointed out that technically it doesn’t matter whether we work with monoliths or microservices; in theory you can scale out a monolith just as well as microservices. What microservices gives us is a strong and explicit boundary to every service. Although the architects draw limits for communication between components, we as developers are good at ignoring them. If it’s technically possible to directly communicate with another component we will do that, ignoring any rules the architects have set up. Keeping those boundaries intact is much easier when they are explicit and even more so if a component is managed by another team.


The rise of open source use across state and local government

A simple solution for agencies looking to defend against open source vulnerabilities is to turn to enterprise open source providers. Enterprise-ready solutions undergo rigorous testing to ensure that any defect is detected, prevented, or addressed in a timely manner, thereby mitigating an agency’s risk. Further, enterprise solutions protect government networks from these risks throughout the product lifecycle by ensuring the code is up-to-date, secure, and functioning as expected. Investing in future-oriented, enterprise open source solutions can also help lower the total cost of ownership. This is possible because agencies can sidestep the costly and painful vendor lock-in that comes with proprietary software. Instead, enterprise open source is platform agnostic, enabling the agency to make the hardware, operating system, and environment decisions that are optimal for its requirements and mission. At the end of the day, an enterprise open source solution provides government users with the best of both worlds.


Crowdstrike CTO on securing the endpoint and responding to a breach

The first was that a modern security platform had to be built as a cloud-native solution. The cloud was critical not just for ease of management and rapid agent rollouts, but also for protection of off-premise assets and workloads deployed in public and hybrid clouds. The cloud would also be used to dramatically reduce the performance impact of an endpoint agent on a system, as heavy processing work would be offloaded to elastically scalable cloud compute. Finally, the cloud could leverage the power of crowdsourcing – collecting trillions of security-related events from endpoint agents deployed all over the world to learn from every adversary action and take away adversaries’ ability to reuse tradecraft as they launch attacks against new victims. The second principle was to leverage machine learning/artificial intelligence to predictively identify new threats by training algorithms on the largest dataset in the security industry – over a trillion events collected every single week by CrowdStrike Falcon agents protecting organisations in 176 countries.


What is Blockchain Technology? A Step-by-Step Guide For Beginners

Information held on a blockchain exists as a shared — and continually reconciled — database. This is a way of using the network that has obvious benefits. The blockchain database isn’t stored in any single location, meaning the records it keeps are truly public and easily verifiable. No centralized version of this information exists for a hacker to corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone on the internet. ... As revolutionary as it sounds, Blockchain truly is a mechanism to bring everyone to the highest degree of accountability. No more missed transactions, human or machine errors, or even an exchange that was not done with the consent of the parties involved. Above anything else, the most critical area where Blockchain helps is to guarantee the validity of a transaction by recording it not only on a main register but a connected distributed system of registers, all of which are connected through a secure validation mechanism.
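A minimal sketch of the hash-linking that makes such a ledger tamper-evident: each block commits to the previous block's hash, so altering an old record invalidates every later link. This is a toy single-process chain, not a distributed system with consensus; the records are invented.

```python
import hashlib, json

def make_block(record, prev_hash):
    """A block is a record plus the previous block's hash, sealed by its own hash."""
    block = {"record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check every link back to the previous block."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": block["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
print(verify(chain))                     # True
chain[0]["record"] = "genesis (edited)"  # tamper with history...
print(verify(chain))                     # False: the ledger exposes the edit
```

In a real blockchain, many independently hosted copies of this chain cross-check each other, which is what makes the records "truly public and easily verifiable."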



Quote for the day:


"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley


Daily Tech Digest - September 28, 2018

7 Most Prevalent Phishing Subject Lines

People are curious and they want to help, he continues, and it's these two qualities that make them susceptible to phishing attacks. When they do fall for scams, most employees are quick to realize it. "I'm really busy," "I missed that," and "I should've caught that email" are all commonly heard phrases from victims who have opened malicious emails and realized they did wrong. "No matter how much technology you put in place to block them, stuff always gets through," Hayslip adds. Webroot recently scanned thousands of phishing emails from the past 18 months to learn more about trends in the subject lines designed to trick targets. Hayslip presented the findings to about 100 fellow CISOs around the country and learned "almost everybody's seeing the same thing," he says. Financially related messages and notions of urgency are commonly seen in phishing emails, albeit under different subject lines. John "Lex" Robinson, cybersecurity strategist at Cofense, echoes Hayslip's sentiments and says attackers are getting better and better at understanding the context of the emails they're sending and who they're targeting.
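A scan like Webroot's ultimately reduces to normalising subject lines and tallying the most common ones. A minimal sketch with invented sample data:

```python
from collections import Counter

# Invented sample of phishing subject lines:
subjects = [
    "URGENT: Invoice attached",
    "urgent: invoice attached!!",
    "Password expiry notice",
    "Re: Payment overdue",
    "re: payment overdue",
]

def normalise(s):
    """Lowercase, strip trailing punctuation, collapse whitespace."""
    return " ".join(s.lower().rstrip("!.").split())

counts = Counter(normalise(s) for s in subjects)
print(counts.most_common(2))
# [('urgent: invoice attached', 2), ('re: payment overdue', 2)]
```

Real analyses add clustering of near-duplicate lines and tagging of themes (financial, urgency), but the core tally is the same.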


Agile IT held back by legacy tech and legacy budgeting


“Re-architecting and integrating applications is difficult work, and for many CIOs, this barrier is best overcome by seeking outside help and bringing in skilled application remediation experts from a third party,” the report said. A big majority of organisations (87%) say legacy applications are slowing their journey to creating an agile workspace, with the main causes cited as cost of re-architecting or transforming applications (68%), disruption to the user experience (43%), and a lack of in-house skills to modernise applications (36%). Evolving alongside this application challenge has been the shift towards cloud computing, with organisations looking to software-as-a-service (SaaS) applications to increase workspace agility. However, only 25% of organisations think SaaS applications meet their requirements, and this figure drops to 17% in mid-size organisations. Overall, 84% of organisations say an inability to roll out new services and applications to their workforce quickly is affecting business competitiveness.


Blockchain Applications That Are Transforming Society

Primitive forms of smart property exist. Your car key, for instance, may be outfitted with an immobilizer, so the car can only be activated once you tap the right protocol on the key. Your smartphone, too, will only function once you type in the right PIN code. Both rely on cryptography to protect your ownership. The problem with primitive forms of smart property is that the key is usually held in a physical container, such as the car key or SIM card, and can’t be easily transferred or copied. The blockchain ledger solves this problem by allowing blockchain miners to replace and replicate a lost protocol. ... Any material object is a ‘thing.’ It becomes part of the internet of things (IoT) when it has an on/off switch that connects it to the internet and to other objects. By being connected to a computer network, an object such as a car becomes more than just an object: connections are now people-to-people, people-to-things, and things-to-things. The analyst firm Gartner says that by 2020 there will be over 26 billion connected devices. Others put that number at over 100 billion!


Quantum Computing: Why You Should Pay Attention


Typical computers rely on bits, which are represented by ones and zeros. Using just these two numbers, our computers can solve any arithmetic question and have excellent logic capabilities. Quantum computers, on the other hand, replace bits with quantum bits, or qubits. Unlike their binary counterparts, qubits can exist as both ones and zeros at the same time, in a so-called superposition. This isn’t an analogy: according to the most common interpretation of quantum mechanics, qubits are actually ones and zeros simultaneously. With this capability, qubits are able to solve certain problems that are computationally expensive using binary arithmetic and logic in far fewer steps, and some problems can be solved with just a single step. Although the very concept of quantum computing sounds outlandish, devices are being developed by tech giants including Intel and Google, and Microsoft is already unveiling toolkits for developing software for quantum computers.
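A tiny state-vector sketch of superposition, for intuition only (a classical simulation like this is nothing like real quantum hardware): a single qubit is two amplitudes, one for |0> and one for |1>, a Hadamard gate puts a |0> qubit into an equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (amp0, amp1)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities: squared magnitudes of the amplitudes."""
    return tuple(abs(amp) ** 2 for amp in state)

zero = (1.0, 0.0)                # a qubit prepared as |0>
superposed = hadamard(zero)      # equal superposition of |0> and |1>
print(tuple(round(p, 3) for p in probabilities(superposed)))  # (0.5, 0.5)
```

Simulating n qubits classically takes 2^n amplitudes, which is exactly why the quantum devices themselves are interesting.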


What Is Reinforcement Learning - A Simple Explanation & Practical Examples


Similar to toddlers learning how to walk who adjust actions based on the outcomes they experience such as taking a smaller step if the previous broad step made them fall, machines and software agents use reinforcement learning algorithms to determine the ideal behavior based upon feedback from the environment. It’s a form of machine learning and therefore a branch of artificial intelligence. Depending on the complexity of the problem, reinforcement learning algorithms can keep adapting to the environment over time if necessary in order to maximize the reward in the long-term. So, similar to the teetering toddler, a robot who is learning to walk with reinforcement learning will try different ways to achieve the objective, get feedback about how successful those ways are and then adjust until the aim to walk is achieved. A big step forward makes the robot fall, so it adjusts its step to make it smaller in order to see if that's the secret to staying upright. It continues its learning through different variations and ultimately is able to walk.
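The trial-feedback-adjust loop described above is, in its simplest form, tabular Q-learning. Below is a toy sketch on an invented one-dimensional "walk to the goal" task (states 0..4, step left or right, reward only at the goal); the task, hyperparameters, and episode count are all illustrative choices.

```python
import random

random.seed(1)
n_states, actions = 5, (-1, +1)           # walk left/right on states 0..4
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Try an action: usually the best-known one, sometimes a random one.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Feedback from the environment adjusts the value estimate,
        # like the toddler shortening a step after a fall.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)   # once learning has converged, this is [1, 1, 1, 1]: always step right
```

The discount factor gamma is what lets the algorithm trade immediate feedback against long-term reward, mirroring the "keep adapting over time" point above.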


Organisations are beginning to find cyber threats more effectively

“Threat hunting is part of nonstandard security operations. It’s a good combination of threat intelligence and hypothesis generation based on likely and probable locations of intrusions into a network. Once an organisation begins consuming threat intelligence, natural hunting begins to take place,” said Robert M. Lee, SANS certified instructor and co-author of the report. Rob T. Lee, co-author and curriculum lead for digital forensic and incident response training at the SANS Institute, added: “One of the most notable highlights of the 2018 survey is that it demonstrates a more accurate use of threat hunting in many organisations. This change in threat hunting practices has increased since the last survey in 2017, which showed many organisations typically were hunting incorrectly through traditional intrusion detection. In this year’s survey, many more organisations were using proper threat intelligence to help identify the best locations inside an organisation’s network to look for anomalous behaviours that are direct indicators of threats.”


“Everything is fine” vs. “we’re doomed” isn’t the way to frame election security

Humans are really bad at assessing risk. We tend to fixate on catastrophic but unlikely occurrences—like terrorism, for example—while ignoring mundane risks that cause cumulative harm, such as eating poorly, not maintaining bridges, or failing to save for retirement. This difficulty in assessing and responding to risk is especially pronounced in information security, where non-technical people, in particular, find themselves forced to choose between extreme paranoia (and thus, a defeatist attitude) and unrealistic optimism ... Mitigations that improve, but by definition do not perfect, security are worth nothing if we are not able to calibrate our trust to the level of security they provide. I trust that the lock on the front door to my apartment is good enough to withstand all but the most determined attacks. But if someone with a battering ram, explosives, or a talented black bag team wants to get into my apartment, I know that I can't prevent intrusion by those kinds of attackers. Nevertheless, I don't stay awake at night obsessing over unlikely threats or threats I cannot defend against.


Analytics Translator – The Most Important New Role in Analytics


The role of Analytics Translator was recently identified by McKinsey as the most important new role in analytics, and a key factor in the failure of analytic programs when the role is absent. As our profession of data science has evolved, any number of authors, including myself, have offered different taxonomies to describe the differences among the different ‘tribes’ of data scientists. We may disagree on the categories but we agree that we’re not all alike. Ten years ago, around the time that Hadoop and Big Data went open source, there was still a perception that data scientists should be capable of performing every task in the analytics lifecycle. The obvious skills were model creation and deployment, and data blending and munging. Other important skills in this bucket would have included setting up data infrastructure. And finally there were the skills that were just assumed to come with seniority: storytelling and great project management. Frankly, when I entered the profession, this was true, and for the most part, in those early projects, I did indeed do it all.


Shell CTO Yuri Sebregts talks about using AI to amplify the human impact of its workforce


As well as the predictive maintenance project, the company has also created a service called Machine Vision, using Azure-based deep learning technologies, that combines CCTV footage with internet of things devices to alert employees at its service stations to potential safety hazards occurring on the forecourt in real time, such as someone lighting a cigarette or driving erratically close to a petrol pump. There is also potential for this technology to be applied in a stocktaking context in Shell’s warehouses and petrol stations, says Sebregts, so that staff can intervene and replenish supplies as and when needed. Beyond its retail sites, robotics is already commonly used to install equipment in offshore environments where it would be hazardous or impossible to send humans, and Sebregts also sees potential for AI to enhance how that work is carried out in future.


How Data Security Improves When You Engage Employees in the Process

A great example of inclusive programming is anti-phishing training, which teaches employees to identify fraudulent attempts to obtain sensitive information electronically, often for malicious reasons, under the guise of a trustworthy source. In order for this training to be successful, employees must learn how to make choices when they receive potential phishing emails. Experiential training with real-world simulations — where employees build their knowledge base and ability to make choices in the moment, as it relates to them and their learning style — has proved to be effective. According to Learning Pyramid research cited by Herman Miller, learning by doing yields a 75% knowledge retention rate, compared with 5% for lectures. Giving employees a choice of password management software to meet company security requirements may also foster an environment of partnership rather than rigid control.



Quote for the day:


"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer


Daily Tech Digest - September 27, 2018

4 factors to rethink when leading innovation

You may have been quite good at planning. Perhaps it helped you achieve your current leadership role. However, the age of planning is over, Whitehurst says. Processes such as performance management need to be dramatically shortened and focus more on experimentation. Innovation, he says, “requires greater comfort with ambiguity all the way through your organization.” A prerequisite for that comfort is an understanding of corporate strategy. “Most leaders are saying, ‘My people really need to know the details of their job function. They don’t necessarily need to know the strategy of the whole company,’” Whitehurst says. “But to be more innovative, you have to flip that around. Everybody needs to know the strategy of the company and how they fit into it.” ... And you frankly want to be kind of ambiguous down to what the organization and the individual needs to do – because that’s where you’re giving people latitude to try.


The way companies look to fend off cyberattacks needs a rethink

When it comes to password protection, the only game-changer from the hacker’s perspective is length and uniqueness. Unique passwords of more than 25 characters rarely get reused, and they demand that users be innovative when creating them. When they are encrypted inside the solution, software or service, they will be the last ones to be cracked. Password manager applications are a good solution too. But if you are concerned about their level of security, given that nearly all of them have been hacked, a poem or a shopping list would serve well as a password. It would do a better job of minimising the chances of being hacked than the typically advised uppercase and special-character tweaks. We have discovered from our research that more than 4.3 billion passwords to different online accounts worldwide have been breached over the past three years. That is one in every three accounts. So it is increasingly likely that your organisation’s information is also vulnerable.
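The length argument can be made concrete with a back-of-the-envelope entropy estimate. The sketch below is illustrative only — the 94- and 27-character pool sizes are assumptions, not figures from the article — but it shows why a long, simple passphrase out-scores a short "complex" password against brute force:

```python
import math

def entropy_bits(length, pool_size):
    """Brute-force search space of a random password, in bits:
    length * log2(characters an attacker must try per position)."""
    return length * math.log2(pool_size)

# 8 characters drawn from upper + lower + digits + symbols (~94 characters)
short_complex = entropy_bits(8, 94)     # roughly 52 bits
# 28 characters of lowercase words and spaces (~27 characters)
long_passphrase = entropy_bits(28, 27)  # roughly 133 bits
```

Every extra 10 bits multiplies the attacker's work by roughly 1,000, which is why the shopping-list passphrase wins despite its smaller character pool.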


Data center admins gain the benefits of microservices

Microservices are applications decomposed into a collection of services that implement business functionalities. To maintain certain software functions on a more regular basis, developers can break out software components -- or services -- to form a distributed system. This makes the technology a good fit for cloud-based or on-premises data center deployments because it is an architectural pattern that can be tailored to developers' needs. Docker and automation software, which many data center admins are already familiar with, are key enablers of microservices. "Microservices do not require gold-plated, expensive, dedicated hardware," said Ian McCarty, president and CEO of PhoenixNAP Global IT Solutions. "They can run on shared clusters on top of commodity [data center] hardware, which is easier to scale and replace." To get a microservices application up and running, admins must use an infrastructure with a low-latency connection. Admins can turn to automation software to ease deployment, because they must deploy each application component separately -- often within containers.
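As a minimal illustration of one component broken out as its own service, here is a self-contained Python sketch (the service name and endpoint are invented for the example) exposing the kind of JSON health endpoint an orchestrator or admin tool would poll after deploying each component separately:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    """One tiny service responsibility: report its own health as JSON."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok", "service": "inventory"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *_):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve from a background thread,
# standing in for one independently deployed component.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
health = json.loads(urlopen(f"http://127.0.0.1:{port}/health").read())
server.shutdown()
```

In a real deployment each such service would run in its own container, with the orchestrator hitting `/health` to decide whether to route traffic to it.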



An interview with Robert Fink, Architect of Foundry, Palantir’s open data platform

The notions of open platforms and open architectures originate in hardware design and describe systems in which different components can be added, replaced, or upgraded independently. Buyers like this idea, because it reduces vendor lock-in and increases flexibility and negotiating power. This translates more or less directly to software platforms: they are considered open if their inter-component APIs follow open standards, are well documented, and can be accessed by any party through readily available tools and libraries. This is in contrast to closed APIs whose internals are undocumented and often intentionally cryptic. Moreover, use of closed APIs is typically governed by license agreements that prohibit third-party tools, or even outright ban any external use of its data. At Palantir, we lean on open standards like JSON and HTTP for APIs, and open-source technologies for data storage and transformations. In the early days of computing, the majority of commercial platforms were closed (because, hey, who doesn’t like a good monopoly?) and this led to the siloed compute and data infrastructures that most IT organizations on this planet are still trying to unwind today.


What the device-as-a-service (DaaS) trend is all about

Initially, it was major OEMs (original equipment manufacturers) like Dell, Hewlett-Packard and Lenovo. That made sense, since a big part of PCaaS was the faster hardware replacement cycle inherent to subscriptions; computer makers were the most motivated to pitch such plans, especially when shipments stalled, then sank, in large part because consumers abandoned PCs. Although vendors like those named above remain major players, others are eager to get in on the action. Microsoft, for one. Its "Microsoft Managed Desktop," or MMD, will include the usual bits and pieces - although restricted to Microsoft's own hardware for now - but because Microsoft makes the operating system, it has some unique strengths: it runs Windows 10's update/upgrade servicing, and it pulls a wealth of data from devices running the OS through its mandatory telemetry. That gives Microsoft an edge over computer makers in the device management parts of PCaaS. Telemetry is key, Gartner analyst Stephen Kleynhans agreed in an interview last week.


Manage Insider Threat Risk and Prevent ‘Big Brother’ Perception


Introducing the idea of insider threat risk management to your workforce and enterprise risk planning agenda can be challenging. Your insider risk management plan may be met with a lack of workforce understanding of proposed policy changes, concern that it is going to set up ‘security speed bumps’ that impede workflows, or fear that it will invade their privacy. This means that the onus is on you to determine how to get your organization’s risk management naysayers onboard your well-intentioned security vessel. How can your organization introduce the idea of insider risk management to the workforce with as much transparency and inclusivity as possible? I learned the importance of bringing the workforce onboard through years of counterintelligence and insider threat mitigation in government and industry. ... Remind them that the impacts of an insider incident transcend stock price and shareholder value and could put the physical safety of staff and personnel at risk.


Grasp container basics to plan enterprise adoption


Before container basics comes container history. Physical hardware systems have an inherent risk of inefficient use and the inherent benefit of application isolation. To address inefficiency, the IT industry adopted multitasking systems, which run several applications at once, but that simple form of resource sharing doesn't separate the applications enough. One app can contaminate the performance of other apps if it behaves badly, and attackers may even be able to breach security from one app to another. This tradeoff between isolation and efficiency is inherent in virtualization because of shared resources. Perfect security and performance management requires physical isolation in bare metal. Highest efficiency calls for multitasking OSes. Virtualization options fall between these extremes. Virtual machines (VMs) replicate the server, with a full OS and middleware. Hypervisor software manages and runs these VMs on physical resources.


A cybercrime epidemic drives business–and funding–to cybersecurity

The artificial intelligence company’s growth highlights the demand for sophisticated defenses against hackers as well as Europe’s increasing capacity to spin up top-tier tech companies. “What drives the valuation is our response to the things covered in the media,” said Darktrace CEO Poppy Gustafsson in an interview. The five-year-old tech company was created by University of Cambridge mathematicians, and its software is designed to monitor everything from cloud networks and internet-of-things devices to industrial control systems. Cyber security is in demand for good reasons. As more data is kept online, the potential for breaches is increasing. At the same time, more of our devices are connected to the internet, giving hackers new ways to snoop on corporate board meetings, spy on political activists, or even damage important utilities. Data theft costs companies an average of $3.9 million, while “mega breaches” of more than 1 million compromised records can cost businesses as much as $350 million, according to a study sponsored by IBM Security and conducted by the Ponemon Institute.


Scrum The Toyota Way


We have learned that agility is hard, really hard. There is also no such thing as an agile transformation. You fundamentally have to change your operating model, and undertake an organizational transformation to achieve the agility you desire. Scrum is but one item in the toolbox to help you do this. You also need a sense of urgency. If the C-suite doesn’t see a compelling reason to change, chances are you’ll actually make things worse by messing with the current condition, and the resistance to change will be overwhelming, with no mandate to actually achieve that change. I’ve also realized that not everyone needs to be agile! If you’re shipping concrete slabs you probably don’t need to do that in two-week sprints, as the need to change rapidly is not there. Sure, Scrum will give you a planning cadence, but Scrum was intended to work in complex domains and with complex systems. These are areas where a linear approach and fixed thinking are not effective.


5 Competencies You Need to Succeed in DevOps and Beyond

Culture is a set of shared organizational assumptions that are learned by teams as they resolve problems. Integrated into day-to-day norms, culture is considered the correct way to perceive, think, and feel in relation to problems. Within DevOps, culture evolves to include the implementation of continuous improvement, the building of collective intelligence, and continuous reaction to feedback—all based on trust. Transformational change requires strong leaders at all levels with core competencies in visioning, strategic management, flexibility, and the ability to inspire others to innovate and perform. Learning a set of skills does not make you a leader; instead, leadership embodies an innate passion, integrity, authenticity, and courage. DevOps, like any other environment, requires leaders who are champions: individuals with advanced communication skills, knowledge of diverse cultures, and a collaborative approach to working in teams. DevOps leadership is not something you learn and just do.



Quote for the day:


"Create A Fun And Happy Environment And Success Will Soon Be A Guest." -- Gordon Tredgold


Daily Tech Digest - September 26, 2018


The old ways of operating the network are rapidly becoming obsolete. The CLI is excellent for troubleshooting or small changes, but in increasingly complex infrastructure it is not feasible to only use the CLI. Today, the network is programmable. Key functions are automated. Network engineers are network programmers and need new tools and most importantly…APIs. That is where I make the comparison between networking engineers and Frodo Baggins – i.e. wishing that this change had not been thrust upon them “in their time.” I have talked to many engineers and hear their frustration and fears of being left out in that change or that their CCIE is not as valuable as before. However, although the change might be scary, there are a lot of reasons for not worrying too much. The first is that knowledge of key concepts is still immensely valuable and probably even more so than before. When your infrastructure is more complicated, taking the proper approach to designing a change – a network refresh, or defining how you are going to operate it – is more complicated.



Augmented intelligence: The clearest path to focused AI?

The Eye demonstrated great accuracy when detecting objects on the road, but the AI's primary task is to build behavior patterns. Of course, there are several parameters involved, such as where an event happens, under what conditions and whether there are pedestrians on the road. The system compares what's normal under these circumstances with what is currently happening, and if the difference is beyond a certain threshold, it will send an alert to all cars within that proximity. While access to information about every car on the road sounds like a privacy nightmare, Discoperi has already taken steps to ensure privacy as well as give users full control over their data by storing the data on a blockchain. Privacy might seem more like a problem for augmented intelligence because it involves human input, while artificial intelligence is theoretically fully autonomous. But due to the real shortcomings in AI, many companies have already used humans behind the scenes to complete AI's job where it failed, raising privacy concerns in AI.
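The "compare current behavior to a learned norm, alert past a threshold" logic described here can be sketched in a few lines. This is a generic z-score check, not Discoperi's actual algorithm, and the speed readings are invented for the demo:

```python
from statistics import mean, stdev

def should_alert(history, current, threshold=3.0):
    """Flag a reading that deviates from the learned norm by more than
    `threshold` standard deviations. `history` holds past readings
    observed under comparable conditions (location, weather, etc.)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Speeds (km/h) previously seen at this spot vs. one car now doing 90
normal_speeds = [48, 49, 50, 51, 52]
```

A production system would learn a separate baseline per context (location, time, conditions) rather than a single global one, which is exactly the behavior-pattern modeling the article describes.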


Microsoft Adds Features to Teams, Beefs Up Security

Microsoft sees Teams as a critical interface to the entire line of Office 365 and Microsoft 365 services. Added to Office 365 commercial licenses 18 months ago, Teams has become the most rapidly adopted business application in Microsoft’s history, company officials revealed at Ignite, where the company said that 329,000 organizations now use Teams. It was at last year’s Ignite conference when Microsoft revealed its plans to integrate the Skype for Business Online voice, video, chat and screen-sharing capabilities into Teams. It was a surprising move since Microsoft had just rolled out Teams six months earlier. The entire set of communications capabilities in Skype for Business Online became available in Teams last month. Over time, Microsoft intends to deprecate Skype for Business; it hasn’t said when, but has signaled that existing customers should plan their migration, with a call to action for partners to facilitate that planning. New Office 365 subscriptions now only include the Teams client – not Skype for Business – said Ron Markezich.


Serverless Platforms Compared for Performance

Serverless providers charge not just for CPU, memory and number of requests, but also for network and storage. Providers differ in how they tie CPU allocation to the memory configured. AWS, for example, gives more CPU cycles (PDF) to instances with higher memory. Google follows a similar strategy, whereas Azure varies in how CPU is allocated, with "4-vCPU VMs tending to gain higher CPU shares". Concurrent requests change the average response time of a function. For non-concurrent requests, the resource allocation remains almost the same for all providers except for Google, where it varies by around 30%. The compute time in AWS increased by 46% for concurrent requests when the same call was invoked 50 times at once. For Google and Azure it was 7% and 3% respectively, whereas it increased by 154% in IBM. Other tests reveal AWS to have the best performance in terms of concurrent execution.
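The effect of concurrency on response time is easy to reproduce locally. The sketch below times a simulated 50 ms I/O-bound "function" invoked sequentially versus ten at once; it illustrates the measurement approach used in such benchmarks, not any provider's actual numbers:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_function(_):
    """Stand-in for a deployed serverless function (~50 ms of I/O-bound work)."""
    time.sleep(0.05)
    return "done"

def timed(calls, workers):
    """Wall-clock time to complete `calls` invocations with `workers` in flight."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fake_function, range(calls)))
    return time.perf_counter() - start
```

Against a real provider you would replace `fake_function` with an HTTPS invocation of the deployed function and compare per-request latency as the in-flight count grows; that rise in average latency under load is what the 46%/154% figures above are measuring.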


What is XaaS? A way to inject agility into your digital business

XaaS evolved from the cloud services model, Loucks says. Seeking cheaper, more efficient ways to run IT, CIOs began adopting cloud for anything from email to CRM, to ITSM and business intelligence visualization, to compute, storage and even networking, via software-defined networks, says Loucks. XaaS describes on-demand services that achieve scale horizontally across the business. Seventy-one percent of 1,170 IT and line-of-business professionals Deloitte surveyed from large U.S. companies said that XaaS makes up more than half of their enterprise IT. Learning from their vendor partners along the way, savvy CIOs began co-opting XaaS best practices as their own blueprint for building and delivering new IT services to their business peers. "Rather than simply using flexible consumption models to cut costs and increase workforce efficiency, many organizations are adopting XaaS to transform digitally and become more agile," Loucks says in a new Deloitte report, "Accelerating agility with XaaS."


A look at the new Google cloud security tools and features

The first new Google cloud security feature introduced by GCP is known as shielded VMs. This feature allows customers to enable a virtual Trusted Platform Module -- or vTPM, as Google calls it -- that supports integrity validation for boot processes and the kernel of the VM, as well as logging all the integrity checks with the Google Stackdriver logging and monitoring service. There is no additional charge to use shielded VMs, which makes this a great opportunity to improve the resiliency and security of compute workloads in the Google cloud environment. GCP also has a number of powerful new features for container deployments. Containers require access to repositories to install and configure software packages. However, there are many known concerns and issues with trust validation and security for repositories and software distribution, particularly in open source environments. To aid in securing code registries and repositories, GCP now offers the Container Registry, a private registry in which approved Docker images can be stored. 


Edge computing is the place to address a host of IoT security concerns

Placing a gateway between the industrial endpoints and the rest of a company’s computing resources lets businesses implement current security and visibility technology without ripping and replacing expensive IIoT machinery. The edge model also helps IIoT implementations in an operational sense, by providing a lower-latency management option than would otherwise be possible if those IIoT endpoints were calling back to a cloud or a data center for instructions and to process data. Most of the technical tools used to secure an IoT network in an edge configuration are similar to those in use on IT networks – encryption, network segmentation, and the like. Edge networking creates a space to locate security technologies that limited-capacity endpoints can’t handle on their own. Mike Mackey is CTO and vice president of engineering at Atonomi, makers of a blockchain-based identity and reputation-tracking framework for IIoT security. He said edge computing adds an important layer of trust between a company’s backend and its potentially vulnerable IIoT devices.


First known malicious cryptomining campaign targeting Kodi discovered

As it turns out, some cyber thugs actually decided Kodi would be a good malware distribution platform. Researchers at ESET detected the first publicly known cryptomining campaign launched via the Kodi platform. If you use add-ons to enhance your movie or TV viewing pleasure, then it is possible your Windows- or Linux-based Kodi has been secretly mining Monero for months and months. In fact, it may continue to do so unless you take action. After the XBMC repository for add-ons was shut down, ESET discovered the repository had been part of a cryptomining campaign that went back to December 2017. That repository was added to the Bubbles and Gaia add-on repositories in December 2017 and January 2018. ESET warned, “From these two sources, and through update routines of unsuspecting owners of other third-party add-on repositories and ready-made Kodi builds, the malware spread further across the Kodi ecosystem.”


Businesses that take humans with them on robotics journeys gain the most


While investing in automation and AI technologies can cut costs and increase productivity dramatically, there is more to be gained if organisations focus on up-skilling staff that are freed from the tasks being automated. The research looked at the differences in performance between organisations that focus exclusively on productivity gains from technology and those that focus on the technology and the human workforce. It found that a hyper-productive environment can be achieved if organisations enable humans and robots to work together. Chris Brauer, director of innovation in the Institute of Management Studies at Goldsmiths, University of London, led the research. “In the public discussion, there has been an assumption that [humanity and automation] are in conflict in that in pursuing higher levels of performance and productivity in an organisation through technology you would have to sacrifice humanity,” he said. “But this does not follow from anything we have learned.”


How a new generation of security firms is learning to protect blockchain code

Some unsafe code can be detected with automated analysis tools without much human intervention: If a contract allows any user to extract its funds, it’s probably a mistake, says Petar Tsankov, cofounder and chief scientist of ChainSecurity, a Swiss startup spun out from the prestigious technical university ETH Zurich. ChainSecurity has developed a tool called Securify, which can quickly spot and flag potential issues in Solidity code. But other bugs are only visible as flaws within the context of what a contract is actually supposed to do, meaning the first phase of a security audit often involves sitting down with developers to understand exactly what their contracts are hoping to accomplish. “Typically, there’s very informal documentation on what the contract is supposed to do,” says Tsankov. Then typically comes a mix of human analysis and automated tests to determine if it’s possible to get the contract to violate its specifications. Trail of Bits has developed a tool called Echidna that can quickly execute smart contracts with a variety of inputs, looking for ways to get the code to misbehave.
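The property-based fuzzing approach tools like Echidna take can be illustrated generically. The toy "contract" and its invariant below are invented for the example (real tools execute actual EVM bytecode); the idea is the same: hammer the code with random inputs and report any input that violates a stated property:

```python
import random

def token_transfer(balances, src, dst, amount):
    """Toy 'contract' with a deliberate bug: no check that amount <= balance."""
    balances[src] -= amount
    balances[dst] += amount
    return balances

def fuzz(prop, trials=1000, seed=0):
    """Try random states and inputs; return the first counterexample
    violating `prop`, or None if none is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        balances = {"a": rng.randint(0, 100), "b": rng.randint(0, 100)}
        amount = rng.randint(0, 200)
        result = token_transfer(dict(balances), "a", "b", amount)
        if not prop(result):
            return balances, amount  # counterexample found
    return None

# Invariant: no account balance may ever go negative
violation = fuzz(lambda b: all(v >= 0 for v in b.values()))
```

The fuzzer quickly finds a transfer larger than the sender's balance, which is exactly the class of "get the contract to violate its specifications" failure the auditors look for.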



Quote for the day:


"Believe in yourself and all that you are. Know that there is something inside you that is greater than any obstacle." -- Christian D. Larson


Daily Tech Digest - September 25, 2018

Machine Learning in Robotics - 5 Modern Applications
Imitation learning is closely related to observational learning, a behavior exhibited by infants and toddlers. Imitation learning is also an umbrella category for reinforcement learning, or the challenge of getting an agent to act in the world so as to maximize its rewards. Bayesian or probabilistic models are a common feature of this machine learning approach. The question of whether imitation learning could be used for humanoid-like robots was postulated as far back as 1999. Imitation learning has become an integral part of field robotics, in which the characteristics of mobility outside a factory setting, in domains like construction, agriculture, search and rescue, military, and others, make it challenging to manually program robotic solutions. Examples include inverse optimal control methods, or “programming by demonstration,” which has been applied by CMU and other organizations in the areas of humanoid robotics, legged locomotion, and off-road rough-terrain mobile navigators.
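The "act so as to maximize rewards" framing can be shown with the smallest possible reinforcement-learning setup: a two-armed bandit with an epsilon-greedy agent. This is a generic textbook sketch, not any robotics system from the article, and the arm payout probabilities are invented:

```python
import random

def run_bandit(arm_probs, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy agent learning which arm yields the most reward.
    Each pull of arm a pays 1 with probability arm_probs[a], else 0."""
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)
    values = [0.0] * len(arm_probs)  # running estimate of each arm's payout
    for _ in range(steps):
        if rng.random() < eps:                      # explore occasionally
            arm = rng.randrange(len(arm_probs))
        else:                                       # otherwise exploit best-so-far
            arm = max(range(len(arm_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values
```

After a couple of thousand trials the agent's value estimates converge toward the true payout rates, so it concentrates its pulls on the better arm — the same maximize-cumulative-reward objective that scales up to robot locomotion and navigation.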



WannaCry and NotPetya inspiring new attacks


“Cyber crime is a business, and market forces, such as the rise in cryptocurrency values, will continue to shape where adversaries focus their efforts,” said Raj Samani, McAfee fellow and chief scientist. “Cryptomining malware is simpler, more straightforward, and less risky than traditional cyber crime activities – causing these schemes to skyrocket in popularity over the last few months. In fact, cryptomining malware has quickly emerged as a major player on the threat landscape. “Organisations need to remain vigilant to these threats – particularly in today’s cloud-first landscape, when many companies are seeing a rapid increase in cloud applications and environments to secure,” he said. To keep crypto-criminals at bay, Samani said businesses must find the right combination of people, process and technology to protect their assets, detect cryptomining threats and, when targeted, rapidly correct systems in the cloud and on-premise.


How Enterprises Can Scale their Machine Learning

"To support the explosion of enterprise use cases, teams need to get bigger and, simultaneously, predictive analytics and machine learning tools need to support these teams as well as the larger community of business people, data engineers, software developers, and AI engineers." Forrester predicts that "massive machine learning automation is the future of data science because it will make data science teams exponentially more productive." The report says that the CRISP-DM process is too sequential and too manually iterative to perform the job. Tools to realize this future productivity will incorporate a few other elements as well. They need to integrate with software development and continuous integration tools. That's important for the AppDev teams to be able to use this work in their design, development and application deployment efforts. And second, these tools must keep up with open source innovations, such as deep learning, Forrester said.


How Java EE development has kept up with microservices


In 20 years, Java EE development has gone through several iterations and added many new features. For instance, open source projects, like Hibernate and Spring, were designed to sit on top of Java EE and address initial shortcomings with specifications, particularly in the area of persistence. "Java EE grew to include functionality from these projects to maintain its appeal to developers," said Simon Ritter, deputy CTO of Azul Systems, a Sunnyvale, Calif.-based company that develops runtime platforms for executing Java-based applications. "Likewise, when web services became a popular technology, Java EE was expanded to include this." Enterprise application development teams are still in the early stages of microservices architecture adoption. However, the Java EE specifications did not provide support in time, Ritter said. As a result, the Eclipse Foundation developed the MicroProfile specification outside the Java Community Process.


A CTO guide: Cyber security best practice tips

“You need to make sure you’re on the latest technology and keep systems patched and up-to-date. The WannaCry was a good example of that. With many of the systems that got breached, the threat vector was a vulnerability that should have been patched.” “Organisations that keep their technology on the latest version and on the latest patch go a long way to keeping their systems protected. So, before I even start talking about advanced threat protection and some of the more clever sophisticated stuff, doing the basics right, getting that hygiene right is vitally important.” ... “The second aspect is that most breaches come via credential compromise of some form. It doesn’t come from a zero-day vulnerability on the backend, hacking a server. It comes from credential theft or some other form of brute force password guessing.” “There’s a variety of phishing attacks, and a breach often comes from compromise in the user credentials. Once the user credentials are compromised, then the attacker can traverse internally, laterally across the network and get access to more stuff.”


10 Ethical Issues Of Artificial Intelligence And Robotics

Existing laws have not been developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As suggested by Brad Smith, Chief Legal Officer at Microsoft, "Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices". Policymakers, researchers and professionals should work together to make sure that AI and robotics provide a benefit to humanity. ... Should robots have rights? If we think of a robot as an advanced washing machine, then no. However, if robots were able to have emotions or feelings, then the answer is not that clear. One of the pioneers of AI, Marvin Minsky, believed that there is no fundamental difference between humans and machines, and that artificial general intelligence is not possible without robots having self-conscious emotions.


Cities Paying Ransom: What Does It Mean for Taxpayers?


On September 1, Ontario’s Municipal Offices experienced a cyberattack that left their computers inoperable when malware entered its systems and rendered its servers useless. The municipality was faced with paying a ransom to the attackers or facing the consequences of being locked out of its systems. Per the advice of a consultant, the city paid an undisclosed amount of ransom to its attackers. ... Only a couple of months earlier, the Town of Wasaga Beach in Ontario faced the same issue and paid one bitcoin per server. It spent 11 Bitcoins, valued at the time at $144,000, to regain control of 11 servers. The town negotiated with the attackers to reduce the price to $35,000. After paying the ransom, Wasaga Beach assessed the damages to its city at $250,000 for loss of productivity and reputation. This scenario has become commonplace today. Cities, municipalities, and government agencies have all experienced ransom attacks. But ultimately taxpayers are the ones that pay the bill for these cyberattacks.


6 Ways To Set Expectations For Artificial Intelligence For Everyone In The Business

Artificial intelligence, in many ways, is a different animal from the technology waves that have crashed through enterprises over the years. For starters, while some nontrivial investments need to be made to put AI in place -- including talent -- hard business results may not be immediately apparent. Still, anyone who has spent time selling technology to the C-suite or board will recognize some of the challenges seen with AI, and can apply some of that learning. With this in mind, Whit Andrews of Gartner has assembled a set of recommendations for selling AI to the business at large, along with an explanation of what's different this time around, and what's not. The challenge, he explains, is that many AI approaches may take time to deliver their full potential. This typically isn't how business units set their budget priorities. The task, then, is to teach and encourage business leaders and end users to think big, and to think long-term when it comes to AI.


A new ARM-based server processor challenges for the data center

Ampere isn’t exactly starting from scratch. It acquired the X-Gene Arm server processor business from MACOM, which in turn had acquired it from Applied Micro, which started its ARM server effort back in 2011. The initial releases, the X-Gene 1 and X-Gene 2, weren’t terribly impressive: just eight cores running at 2.4GHz. Work had begun on the X-Gene 3 chip, codenamed “Skylark,” but was not complete when the business got passed around. Once Ampere picked up the pieces, it ran with what it had and gave the X-Gene 3 chip, now known as eMAG, quite a boost. It has 32 cores running at 3.3GHz, an L2 and L3 cache hierarchy, integrated SATA I/O ports, and 42 lanes of PCI-Express 3.0 peripheral bandwidth across eight controllers. The chip also doubles the number of memory channels to eight per socket, which doubles both the memory capacity, up to 1TB per socket, and the bandwidth. Ampere also has a 16-core chip in the works.


Why Was Equifax So Stupid About Passwords?

In this day and age, there is no excuse for developers to use live data in testing environments. Substituting fake but lookalike data isn't a new concept. Arguably, it dates to the heady "greeking" days of the 1500s, when printers and typesetters began using "lorem ipsum" - nonsensical Latin - as placeholder text. Enter the digital age: Developers need to ensure that when users enter a value into a 16-digit credit card field, for example, their application handles it correctly. But playing with live data in non-production environments increases the risk that insiders or outsiders who shouldn't be seeing the data might gain access to it. That's why numerous development tools offer the ability to obfuscate and mask live data, as well as to generate "good enough" test data that developers can use instead. European IT market researcher Bloor Research notes that such tools are available from a variety of vendors, including CA, Compuware, Dataprof, Dataguise, Delphix, HPE, IBM, Imperva Camouflage, IMS Privacy Analytics, Informatica, Mentis, Net 2000, Protegrity and Solix.
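To make the idea concrete, here is a minimal sketch of the two techniques described above: masking a real card number so only the last four digits survive, and generating synthetic 16-digit numbers that pass the Luhn checksum real applications validate against. The function names are illustrative, not taken from any of the vendor tools mentioned.

```python
import random

def luhn_check_digit(digits):
    """Compute the Luhn check digit for a list of digits (without the check digit)."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def fake_card_number(prefix="4", length=16, rng=random):
    """Generate a synthetic, Luhn-valid card number for test environments."""
    body = [int(c) for c in prefix]
    body += [rng.randrange(10) for _ in range(length - len(body) - 1)]
    return "".join(map(str, body)) + str(luhn_check_digit(body))

def mask_card_number(pan):
    """Obfuscate a real card number, keeping only the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]
```

A test suite can then exercise the 16-digit field with `fake_card_number()` output, while any real values that leak into logs or fixtures are reduced to `************1111`-style strings by `mask_card_number`.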



Quote for the day:


"To work effectively as an agent of change in a pluralistic society, it is necessary to be able to connect with people different from oneself." -- Beverly Daniel Tatum