Daily Tech Digest - March 19, 2018

Linux Foundation unveils open source hypervisor for IoT products

The Linux Foundation recently unveiled ACRN (pronounced "acorn"), a new open source embedded reference hypervisor project that aims to make it easier for enterprise leaders to build an Internet of Things (IoT)-specific hypervisor. The project, further detailed in a press release, could help fast-track enterprise IoT projects by giving developers a readily available option for such an embedded hypervisor. It will also provide a reference framework for building a hypervisor that prioritizes real-time data and workload security in IoT projects, the release said. ACRN is made up of the hypervisor and its device model, complete with I/O mediators, the release noted. Firms including Intel, LG Electronics, and Aptiv have already contributed to the project. "ACRN's optimization for resource-constrained devices and focus on isolating safety-critical workloads and giving them priority make the project applicable across many IoT use cases," Jim Zemlin, executive director of The Linux Foundation, said in the release.

Java at a crossroads: Why the popular programming language needs to evolve to stay alive

Java is used most often in cloud computing, data science work, web development, and app development, said Karen Panetta, IEEE fellow and dean of graduate engineering at Tufts University. "I still see it evolving, and very popular," Panetta said. While languages such as Python are growing as well, Java is adapting to the increasing number of deep learning and machine learning workloads. "There's becoming a lot of libraries out there that are compatible for deep learning," Panetta said. "I think the fact that we keep talking about cloud computing and all of those things, that Java is still going to be the dominant player." Java also has built in more security options than Python, so it's a good option for Internet of Things (IoT) applications, Panetta said. Java has a foothold everywhere, and large user groups and libraries already written, making it a natural pathway for machine learning, Panetta said. "It's evolving to meet the needs," she added.

FPGA maker Xilinx aims range of software programmable chips at data centers

The first product range in the category is code-named Everest, due to tape out (have its design finished) this year and ship to customers next year, Xilinx announced Monday. Whether it’s an incremental evolution of current FPGAs or something more radical is tough to say, since the company is unveiling an architectural model that leaves out many technical details, like precisely what sort of application and real-time processors the chips will use. The features that we do know about are consequential, though. Everest will incorporate a NoC (network on a chip) as a standard feature and use the CCIX (Cache Coherent Interconnect for Accelerators) interconnect fabric, neither of which appears in current FPGAs. Everest will offer hardware and software programmability, and stands to be one of the first integrated circuits on the market to use 7nm manufacturing process technology (in this case, TSMC’s). The smaller the manufacturing process technology, the greater the transistor density on processors, which leads to cost and performance efficiency.

Ethernet bandwidth costs fall to a six-year low

Cloud provider demand for more throughput increased last year's average bandwidth per switch port connection to almost 17 Gb from 12 Gb in 2016, according to the latest report from Crehan Research Inc., based in San Francisco. "Public, private and hybrid cloud providers are looking to deploy much faster networks within and between data centers in order to handle the myriad of new and existing applications that their customers need," Seamus Crehan, president of Crehan Research, said in a statement. "In turn, the data center switch vendors are responding by offering significantly more bandwidth at little or no additional cost." The net result in 2017 was impressive increases in Ethernet bandwidth, port shipments and revenue in the branded switch market, Crehan said. Revenue rose 10% -- the highest annual growth in four years. 

What CISOs must know about DFARS and NIST to be compliant

Privilege management and application control map to many of the different controls within the guidelines – and that’s hardly surprising given the proven effectiveness of the two security controls and the visibility they provide. We know that privilege management allows admin rights to be applied to applications as needed – rather than giving the user too much access. Application control is the part that allows us to whitelist or blacklist an application from running at all. The good thing about these two technologies together is that they offer great bang for the buck. Between them, they overlap to address controls in access control, audit and accountability, configuration management, maintenance, and system and information integrity. Compliance is crucial for CISOs because those who fail to comply will likely lose government contracts. Organizations that are able to demonstrate compliance at an early stage may be in a better position to secure additional wins.
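To make the pairing concrete, here is a minimal sketch, not drawn from any real product, of how application control (an allowlist deciding what may run) can combine with privilege management (targeted elevation for specific applications). All application names and the policy structure are hypothetical.

```python
# Hypothetical policy: which applications may run at all (application
# control) and which get admin rights on demand (privilege management).
ALLOWLIST = {"winword.exe", "excel.exe", "backup_tool.exe"}
ELEVATED = {"backup_tool.exe"}  # the only app granted elevation

def authorize(app: str) -> str:
    """Return the action to take for a launch request."""
    if app not in ALLOWLIST:
        return "block"            # application control: not allowlisted
    if app in ELEVATED:
        return "run-elevated"     # privilege management: targeted elevation
    return "run-standard"         # default: least privilege

print(authorize("excel.exe"))        # ordinary app runs without admin rights
print(authorize("backup_tool.exe"))  # elevated only where needed
print(authorize("mimikatz.exe"))     # anything unknown is blocked outright
```

The point of the sketch is the division of labor: the allowlist gives the audit trail and integrity controls, while per-application elevation keeps users out of the admin group entirely.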

IT’s Most Wanted: 16 Traits Of Indispensable IT Pros

Taking fresh looks at old problems is an essential part of the digital transformations that are changing many organizations’ cultures, leading to approaches like DevOps and agile and incorporating emerging technical solutions such as AI and IoT, says Christoph ...  “There is one thing that IT staff cannot afford — and that’s to stand still,” Goldenstern says. “The willingness to learn and keep evolving, making yourself vulnerable in the process, is absolutely essential to staying relevant and being a growth driver in a constantly evolving business.” ... “In order to succeed in IT you need to have the ability to look at a problem, analyze it, and find a way to solve it,” Martini says. “I look for people who understand their strengths and weaknesses. These are the people that are most capable of learning on the job, while still improving the team’s ability overall. Look at raw potential. If you have potential and the drive to improve, the rest will follow.”

Disaster and Contingency Planning Lessons from the ICU

It’s inaccurate to accuse Memorial Hospital of not having a disaster plan. Theirs was 246 pages long. They had a designated disaster coordinator. What they didn’t have was a leader who had looked ahead multiple steps. They also failed to convert the generic pieces into living, breathing human beings whose survival hinged on what moves they made next. No one at Memorial knew that the surrounding levees would break after Katrina, isolating the hospital. That the generators, whose move to a higher floor had always fallen to a lower budget priority than some other need, would be incapacitated by flooding. That the presence of patients of a provider that was leasing the seventh floor would multiply the census of extremely ill patients exponentially. ...  That same physician was subsequently charged with second-degree murder. She was accused of choosing to euthanize the sickest patients without their consent. A comprehensive review of the situation indicates that, at a minimum, people in authority were scrambling to deal with their pieces of the game. No one was watching the whole board.

Android Oreo: 18 advanced tips and tricks

Got a notification you don't want to deal with immediately — but also don't want to forget? Use Oreo's super-handy (but also super-hidden!) snoozing feature: Simply slide a notification slightly to the left or right, then tap the clock icon that appears along its edge. That'll let you send it away for 15 minutes, 30 minutes, one hour, or two hours and then have it reappear as new when the time is right. ... Another new Oreo feature is the system-level ability for launchers to display dots on an app's home screen icon whenever that app has a notification pending — yes, much like the notification badges on iOS. Unlike iOS, though, Android already has an excellent system for viewing and managing notifications, which can make this addition feel rather redundant and distracting. But wait! Here's a little secret: You can disable the dots — if you know where to look. Mosey on back to the Apps & Notifications section of your system settings, then tap the line labeled "Notifications" and turn off the toggle next to "Allow notification dots."

Predictive maintenance: One of the industrial IoT’s big draws

CarForce is mostly focused on selling its product to garages, but Lora said that the potential beneficiaries are numerous. In the garage use case, mechanics can get real-time maintenance data from vehicles they service, which offers both the ability to warn customers of impending problems and to correlate large data sets to help predict future reliability issues. It's a value-add because the garage can stay a step ahead of mechanical issues – an alert goes off, and the garage can contact the customer to schedule maintenance. Even an awareness that customer X might be coming in for an oil change on a given day can help with planning and scheduling. “If you look at the big data/AI path, step one is just seeing the data,” said Lora. It’s part of what she refers to as the “lilypad” approach to development – building one system to enable a leap to the next lilypad, and so on. CarForce plans to operate on a population level – predicting reliability and failures across big swaths of the automotive landscape.
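The "step one is just seeing the data" stage can be illustrated with a toy alerting loop, not CarForce's actual system: compare each vehicle telemetry reading against a limit and surface anything out of range so the garage can call the customer. Sensor names and thresholds here are invented for illustration.

```python
# Illustrative predictive-maintenance check: flag any telemetry reading
# that exceeds its configured limit, producing human-readable alerts.
def check_vehicle(readings: dict, limits: dict) -> list:
    """Return an alert string for each reading outside its limit."""
    alerts = []
    for sensor, value in readings.items():
        limit = limits.get(sensor)
        if limit is not None and value > limit:
            alerts.append(f"{sensor}: {value} exceeds {limit}")
    return alerts

limits = {"coolant_temp_c": 110, "brake_pad_wear_pct": 80}
telemetry = {"coolant_temp_c": 117, "brake_pad_wear_pct": 42}
print(check_vehicle(telemetry, limits))
# ['coolant_temp_c: 117 exceeds 110']
```

A real system would replace the static limits with models learned from fleet-wide data, which is the leap to the next lilypad the article describes.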

The benefits of machine learning in network management

The problem with rule-based systems is that they require maintenance and frequent updating as new rules are needed. It is often too cumbersome to create rules where numerous changes in the conditions require very different results. In addition, these systems are not very flexible: a rule set may miss a problem if it doesn't exactly match the problem's symptoms. It's much better to build a system that can learn about problems from the network experts who use it -- much like training a person who is new to the field of networking. Then, as new problems and solutions are found, the system learns the symptoms and the resulting actions to take. Most of the industry agrees this kind of AI integration is among the benefits of machine learning. For our purposes, think of machine learning and deep learning as examples of neural network technology. A neural network is trained by feeding it a lot of data from the domain in question -- along with the appropriate answer or response. The trained network then produces the appropriate response when presented with new data.
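The learn-from-examples idea can be shown in miniature without a neural network: store expert-labeled examples (symptom vector, diagnosis) and classify new symptoms by the nearest known example. The feature names and labels below are hypothetical, and 1-nearest-neighbor stands in for the training process the article describes.

```python
# Each expert-labeled example: ((packet_loss_pct, latency_ms, crc_errors), diagnosis)
TRAINING = [
    ((0.0,   5,   0), "healthy"),
    ((0.1,  40,   0), "congestion"),
    ((5.0,  30, 900), "bad cable"),
    ((2.0, 300,   0), "routing issue"),
]

def diagnose(symptoms):
    """Return the diagnosis of the closest known example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda ex: dist(ex[0], symptoms))[1]

# Symptoms that don't exactly match any rule still get a sensible answer:
print(diagnose((4.2, 25, 700)))  # bad cable
```

Unlike a rule set, adding a newly solved problem is just appending one example; no existing rules need rewriting, which is exactly the maintenance advantage claimed above.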

Quote for the day:

"My past has not defined me, destroyed me, deterred me, or defeated me, it has only strengthened me." -- Steve Maraboli

Daily Tech Digest - March 18, 2018

The Differences Between Machine Learning And Predictive Analytics

Machine learning applications can be highly complex, but one that’s both simple and very useful for business is a machine learning algorithm that compares employee satisfaction ratings to salaries. Instead of plotting a predictive satisfaction curve against salary figures for various employees, as predictive analytics would suggest, the algorithm assimilates huge amounts of random training data upon entry, and the prediction results are affected by any added training data to produce real-time accuracy and more helpful predictions. ... Predictive analytics can be defined as the procedure of condensing huge volumes of data into information that humans can understand and use. Basic descriptive analytic techniques include averages and counts. Descriptive analytics based on obtaining information from past events has evolved into predictive analytics, which attempts to predict the future based on historical data. This concept applies complex techniques of classical statistics, like regression and decision trees, to provide credible answers to queries.
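The predictive-analytics side of this comparison, the "predictive satisfaction curve against salary figures", is just a regression fit. Here is a minimal sketch with made-up data, using ordinary least squares written out by hand:

```python
# Fit y = a*x + b by ordinary least squares (classical statistics, the
# technique the predictive-analytics approach described above relies on).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

salaries = [40, 50, 60, 70, 80]           # $k, invented sample
satisfaction = [5.0, 5.8, 6.5, 7.1, 7.9]  # rating out of 10, invented

a, b = fit_line(salaries, satisfaction)
print(round(a, 3), round(b, 2))   # slope ≈ 0.071, intercept ≈ 2.2
print(round(a * 90 + b, 1))       # predicted rating at $90k
```

The machine learning version differs mainly in workflow: the model is re-fit continuously as new (salary, satisfaction) pairs arrive, so predictions improve with every added example rather than being fixed at analysis time.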

Creating With Cognitive

“Our IAIC initiatives were really born in response to our Clients. They said to us, ‘We’re intrigued by the concept of Cognitive Automation — but we don’t really know how it can affect our business’. We knew we had to create a safe place where companies could experiment combining process based automation solutions like Robotics Process Automation (RPA) and Business Process Management (BPM) with Autonomic and Cognitive Assets. Our process starts with an Education session to demystify different types of automation and arrive on common definitions. We then begin Ideating on how this technology could impact the overall organization and business processes. Here we take a hard look at User Experience, End to End Process Views and what the “art of the possible” could become in the future. Next, we move to a Strategy session, where we bring data scientists, business analysts, software engineers and developers together to re-imagine what the client — and their customers — really need to meet their expectations and begin applying the defined technologies to actual client use cases.

Data-Savvy Banking Organizations Will Destroy Everyone Else

“To win in today’s market and ensure future viability, it is essential that organizations capture value quickly, change direction at pace, and shape and deliver new products and services. Organizations also need to maximize the use of ‘always on’ intelligence to sense, predict and act on changing customer and market developments,” said Debbie Polishook, group chief executive, Accenture. The good news is that, despite what appears to be an ominous future, over 40% of executives see more opportunities than threats compared to two years ago. The key is to break down silos and leverage data and insights to support both internal and external business needs. Intelligent organizations have five essential ingredients that contribute to a lasting and impactful business process transformation. These essentials provide the foundation for an agile, flexible and responsive organization that can act swiftly on market and consumer changes and be in a better position to succeed.

Where NEXT for Tech Innovation in 2018?

Healthcare is an industry that is ripe for disruption. We will begin to see the power of IoT in healthcare with the emergence of inexpensive, continuous ways to capture and share our data, as well as derive insights that inform and empower patients. Moreover, wearable adoption will create a massive stream of real-time health data beyond the doctor’s office, which will significantly improve diagnosis, compliance and treatment. In short, a person’s trip to the doctor will start to look different – but for the right reasons. Samsung is using IoT and AI to improve efficiency in healthcare. Samsung NEXT has invested in startups in this area, such as Glooko which helps people with diabetes by uploading the patient’s glucose data to the cloud to make it easier to access and analyse them. Another noteworthy investment in this space from Samsung NEXT is HealthifyMe, an Indian company whose mobile app connects AI-enabled human coaches with people seeking diet and exercise advice.

Security Settles on Ethereum in First-of-a-Kind Blockchain Transaction

“It’s quite exciting that you can now leverage any clearing system, and it’s legally enforceable on even a public blockchain,” said Avtar Semha, founder and CEO of Nivaura, whose technology was used last year to issue an ethereum bond. Semha says it’s unclear with the note being issued Friday exactly how much will be saved on the overall cost of the transaction. But he added that in the ethereum bond last year the final cost was reduced from an estimated 40,000 pounds to about 50 pounds, “which is pretty awesome,” he said. Further, law firm Allen & Overy helped ensure the note was compliant, the Germany-based investment services firm Chartered Opus provided issuance services and Marex helped fix and execute the note within a “sandbox” created by the U.K. Financial Conduct Authority (FCA). As revealed for the first time to CoinDesk, on March 14, Nivaura also received full regulatory approval from the FCA that removed some restrictions and allows the company to operate commercially.

IoT and Data Analytics Predictions for 2018

With the increased collection of Big Data and the necessity of advanced analytics, the year 2018 will witness high usage of cloud-based analytics software rather than on-premises software. Reports suggest that more than 50% of businesses will adopt a cloud-first strategy for their initiatives around big data analytics. AI will completely revolutionize the way organizations work today. Enterprises will take full advantage of machine learning to optimize infrastructural behavior, transform business processes, and improve decision-making. Gartner states that AI is just the start of a 75-year technological cycle and that it will drive revenue for 30% of market-leading businesses. According to Gartner, natural language will play a dual role as a source of input for many business applications and for a variety of data visualizations. Operational transformation is necessary to adopt algorithmic business with DNNs (deep neural networks) in 2018; they will be a standardized component in the toolbox of more than 80% of data scientists.

Beyond Copy-Pasting Methods: Navigating Complexity

Why is agility a good idea? The title of Jeff Sutherland’s book promises it: Scrum: The Art of Doing Twice the Work in Half the Time. At first sight that looks like a pure efficiency issue. But if we ask experienced agilists why they are doing agile, they usually come up with a list of challenges for which agile works better than other approaches: users only half understand their own requirements. Requirements change because the context changes. You have to build technological substitution into your design. Your solution is connected to many other things, and they all interact and change. But also inside the project you know there will be unpredictable problems and surprises. If we look behind the obvious: what is the common force that makes these challenges behave the way they do? The answer is complexity. Exploring complexity has a big advantage. Once we understand more about the complexity behind the problems which we are trying to solve with agile, we in fact clarify the purpose of our agile practice.

Building a new security architecture from the ground up

Overseeing an infrastructure that is operating thousands of servers is a burden on any architecture team. Moving those servers—all or in part—to the cloud takes patience and innovation. The innovation part, Fry said, is key because “most commercial security products are designed and built for specific use cases. Scale and complexity typically are not present,” meaning that architects in those situations need to adapt ready-built products to their networks or develop new tools from scratch, all of which takes time, money, and skill. Further, not all parts of the network can be treated equally; enterprise and customer-facing environments differ from test environments, which differ from production environments. When dealing with networks like those at Yahoo or Netflix, the need to think “outside the box” and innovate is “not desirable; it’s a requirement,” said Fry. Though a security architect may be primarily concerned about security features and controls, the business is primarily concerned about availability and uptime.

Enterprises need a new security architecture: Graeme Beardsell

Today, data, applications, and users are outside the firewall and on the cloud, where they traverse the public Internet. To paraphrase, traditional security systems are guarding a largely empty castle. This means that enterprises need new approaches in their concept of security and to build new security architecture. Essentially, security should be designed to take advantage of the shape of the Internet and not try to defy it. The other challenge that is perhaps ‘invisible’ is the large number of vendors and solutions that each organization needs to manage. Analysts now advocate rationalizing multiple solutions by different vendors into suites of solutions by a single vendor to provide greater efficiency in productivity, and also towards solutions that share an integrated platform to facilitate data exchange and analysis.

10 Lessons Learned from 10 Years of Career in Software Testing

There is nothing wrong with getting certified, but it’s not compulsory. A good tester needs to possess testing skills like a sharp eye for detail and analytical and troubleshooting skills, and I believe no certification can prove that you are good at those skills. While writing test cases, none of us would prefer to think about boundary value analysis and decision tables specifically. What one needs is the application of common sense to knowledge. Who would like a person who points out litter on your balcony and makes you sweep it? No matter that he is helping to make something clean, mostly he won’t be appreciated. That is how the profession is! You might or might not be appreciated for the quality-improvement work you are doing, but you need to understand the importance of what you are doing. And on a timely basis, you need to pat yourself on the back for the work you are doing.
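Boundary value analysis, mentioned above, is easy to make concrete: for a field that accepts a range, test on and just beyond each edge. The validator below is a hypothetical example (an age field accepting 18 to 65), not from the article.

```python
# Hypothetical validator for a field that accepts ages 18-65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: exercise just below, on, and just above
# each boundary, where off-by-one defects typically hide.
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert is_valid_age(age) == expected, f"failed at {age}"
print("all boundary cases pass")
```

Six targeted cases cover the spots where an accidental `<` instead of `<=` would slip through a test suite built from "typical" values alone.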

Quote for the day:

"Character is much easier kept than recovered." -- Thomas Paine

Daily Tech Digest - March 17, 2018

A Comparison Between Rust and Erlang

Erlang, being a high level, dynamic and functional language, provides lightweight processes, immutability, distribution with location transparency, message passing, supervision behaviors etc. Unfortunately, it is less than optimal at doing low-level stuff and is clearly not meant for that. ... Indeed, XML stanzas have to be read from the command line or from the network, and anything coming into the Erlang VM from the outside is tedious to work with. You possibly know the odds. For this kind of use case, one could be tempted to consider a different language. In particular, Rust has recently come to the foreground due to its hybrid feature set, which makes similar promises to Erlang’s in many aspects, with the added benefit of low-level performance and safety. Rust compiles to binary and runs on your hardware directly just like your C/C++ program would do. How is it different from C/C++ then? A lot. According to its motto: “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”

What everyone gets wrong about touchscreen keyboards

They’re not thinking about technology they haven’t seen or other ways of working with a device they haven’t tried. Another reason for the opposition is that two-screen laptops aren’t new. We’ve seen the idea tried in the past ten years in the form of Canova’s Dual-Screen Laptop, the Acer Iconia 6120 Dual Touchscreen Laptop, the Toshiba libretto W105-L251 7-Inch Dual Touchscreen Laptop and others. These devices were unpleasant to use and were rejected by laptop buyers. Future two-screen laptops will be the opposite. Here are five reasons why you’ll love two-screen laptops. ... Apple’s patents show an iTunes and Apple Music interface that replaces the on-screen keyboard with music controls, such as an equalizer, when one of these applications is running. It’s easy also to imagine what kind of interfaces third-party developers could build: turntables for DJs, drawing pads for illustrators, advanced calculator keyboards for eggheads, speech notes for business presentations and game-specific game controls for games.

Don't Drown Yourself With Big Data: Hadoop May Be Your Lifeline

Just how "big" is Big Data? According to IBM, 2.5 quintillion bytes of data are created every day, and 90 percent of all the data in the world was created in the last two years. Realizing the value of this huge information store requires data-analysis tools that are sophisticated enough, cheap enough, and easy enough for companies of all sizes to use. Many organizations continue to consider their proprietary data too important a resource to store and process off premises. However, cloud services now offer security and availability equivalent to that available for in-house systems. By accessing their databases in the cloud, companies also realize the benefits of affordable and scalable cloud architectures. The Morpheus database-as-a-service offers the security, high availability, and scalability organizations require for their data-intelligence operations. Performance is maximized through Morpheus's use of 100-percent bare-metal SSD hosting.

How business intelligence in banking is shifting the paradigm

Banking has always been a competitive environment, even before the digitization of the industry acquired its present pace. Thanks to financial technology, the competition has become even tougher. Fintech companies to banks are what Uber is to taxis. And, as we know, taxi drivers aren’t happy about Uber. Apart from having their profits endangered by fintech companies, banks also experience extreme pressure from regulators. After the 2008 crisis, regulatory agencies, such as FRB, OCC and FDIC, are carefully watching banks. And while most of the banks didn’t participate in activities that led to the crisis, all of them have to follow strict compliance rules adopted after the market crash. Competitive business intelligence solutions for banking have to reflect all these requirements. They have to be flexible and transparent to adapt to competition and regulatory environment. They have to be scalable to keep up with the growing digitization of the industry, as more clients are starting to forget the last time they visited the bank physically.

4 steps to implementing high-performance computing for big data processing

The message for company CIOs is clear: if you can avoid HPC and just use Hadoop for your analytics, do it. It is cheaper, easier for your staff to run, and might even be able to run in the cloud, where someone else (like a third-party vendor) can run it. Unfortunately, being an all-Hadoop shop is not possible for the many life sciences, weather, pharmaceutical, mining, medical, government, and academic companies and institutions that require HPC for processing. Because file size is large and processing needs are extreme, standard network communications, or connecting with the cloud, aren't alternatives, either. In short, HPC is a perfect example of a big data platform that is best run in-house in a data center. Because of this, the challenge becomes—how do you (and your staff) assure that the very expensive hardware you invest in is in the best shape to do the job you need it to do?

ONF puts focus on white box switches with Stratum project

ONF intends to make Stratum available on a broad selection of networking silicon and white box switches. Stratum will also work with existing deployed systems, as well as future versions of programmable silicon. Stratum uses recently released SDN interfaces and doesn't embed control protocols. Instead, it's designed to support external network operating systems or embedded switch protocols, like Border Gateway Protocol. In this way, ONF said the Stratum project will be more versatile and available for a broader set of use cases. Founding members of the Stratum project include Big Switch Networks, VMware, Barefoot Networks, Broadcom and Google -- which donated production code to initiate the project for open white box switches. "Google has contributed the latest and greatest, and just because it's Google [its participation in the project] makes it reasonably significant," Doyle said.

Wave Computing close to unveiling its first AI system

"A bunch of companies will have TPU knock-offs, but that's not what we do--this was a multi-year, multimillion-dollar effort to develop a completely new architecture," CEO Derek Meyer said in an interview. "Some of the results are just truly amazing." With the exception of Google's TPUs, the vast majority of training is currently done on standard Xeon servers using Nvidia GPUs for acceleration. Wave's dataflow architecture is different. The Dataflow Processing Unit (DPU) does not need a host CPU and consists of thousands of tiny, self-timed processing elements designed for the 8-bit integer operations commonly used in neural networks. Last week, the company announced that it will be using 64-bit MIPS cores in future designs, but this is really for housekeeping chores. The first-generation Wave board already uses an Andes N9 32-bit microcontroller for these tasks, so MIPS64 will be an upgrade that will give the system agent the same 64-bit address space as the DPU, as well as support for multi-threading so tasks can run on their own logical processors.

How to use Linux file manager to connect to an sftp server

The sftp command is quite easy. Open up a terminal window and log in with the command sftp USERNAME@IPADDRESS (where USERNAME is the actual remote username and IPADDRESS is the address of the remote machine). Once logged in, you can then download files onto your local machine with the command get FILENAME (where FILENAME is the name of the file). You can upload files with the command put FILENAME (where FILENAME is the name of the file). But what if you don't want to work with the command line? Maybe you find the GUI a more efficient tool. If that's you, you're in luck, as most Linux file managers have built-in support for SSH and its included tools. With that in mind, you can enjoy a GUI sftp experience, without having to install a third-party solution. As you might expect, this is quite easy to pull off. I'm going to demonstrate how to connect to a remote Ubuntu 16.04 server, via the sftp protocol, using both Elementary OS Pantheon Files and GNOME Files.

Deep Feature Synthesis Is the Future of Machine Learning

When data is conceptualized properly, sophisticated AI algorithms can make the most ingenious observations. Algorithms that have access to the right type of data may seem virtually omniscient. Unfortunately, real-world inputs can’t always be easily processed as the type of data that these algorithms depend on. At its core, machine learning depends on numerical data. Unfortunately, some qualitative data is not easily converted into a usable format. As human beings, we have one advantage over the AI algorithms that we sometimes expect to inevitably replace us. We understand the nuances of variables that aren’t easily broken down into strings of thousands of zeros and ones. The artificial intelligence solutions that we praise have yet to grasp this concept. The binary language that drives artificial intelligence has not changed in over half a century since it was first conceived.

4 key steps to building a comprehensive data strategy

As chief data officers and data scientists play more prominent roles in developing data strategies in many enterprises, we see organizations struggling to contend with these challenges and taking a shortsighted ‘save it now, worry about it later’ approach. These situations are worsening as data becomes more active and distributed across an enterprise, with many groups and individuals implementing unique and/or customized data management and storage solutions that often begin as unmanaged ‘aaS’ (as a service) projects and evolve into critical production systems with challenging data governance, security, access and cost management dynamics. Organizations that invest in developing and implementing a strategic data plan are fundamentally better prepared to anticipate, manage and capitalize on the increasing challenges and possibilities of data.

Quote for the day:

"There are things known and there are things unknown, and in between are the doors of perception." -- Aldous Huxley

Daily Tech Digest - March 16, 2018

The future of IT: Snapshot of a modern multi-cloud data center

Multi-cloud Data Centers Are Emerging as a Hedge Against the Major Commercial Clouds
The idea of cloud computing remains simplicity itself, which is a key element of its appeal: move the cost and complexity of procuring, provisioning, operating, and supporting an endless array of hardware, software, and enabling services for your business out to a third party, which does it all for you, yet more securely and with much greater economies of scale. Writ large across virtually all industries, a comprehensive shift to the cloud thus continues to be a top objective of CIOs in many organizations this year. It remains the objective even despite misgivings that we're really just going back to the monolithic IT vendor world again. Not surprisingly, enabling such a strategic move is also the top business goal of the leading commercial cloud vendors, namely Amazon, Microsoft, and Google, who continue to vie vigorously for market share, technical leadership, and -- some would say -- the most interesting and valuable part of the market itself ... Hosting companies like Rackspace and others used to be able to provide a hedge that IT departments could use for such purposes, through services like co-location. However, most such providers have not been able to keep up with the overall capacity race or compete in the bruising cost-efficiency battles that the top cloud providers can afford to wage.

The Containerization of Artificial Intelligence

While AI is more hype than reality today, machine intelligence -- also referred to as predictive machine learning -- driven by a meta-analysis of large data sets that uses correlations and statistics, provides practical ways to reduce the need for human intervention in policy decision-making. A typical by-product of such applications is the creation of behavioral models that can be shared across policy stores for baselining or policy modifications. ... Adoption of AI can be disruptive to organizational processes and must sometimes be weighed in the context of dismantling analytics and rule-based models. The application of AI must be built on the principle of shared security responsibility: under this model, both technologists and organizational leaders accept joint responsibility for securing data and corporate assets, because security is no longer strictly the domain of specialists and affects both operational and business fundamentals.

AI & Blockchain: 3 Major Benefits Of Combining These Two Mega-Trends

AI, as the term is most often used today, is, simply put, the theory and practice of building machines capable of performing tasks that seem to require intelligence. Currently, cutting-edge technologies striving to make this a reality include machine learning, artificial neural networks, and deep learning. Meanwhile, blockchain is essentially a new filing system for digital information, which stores data in a cryptographically secured, distributed ledger format. Because data is hashed and distributed across many different computers, it enables the creation of tamper-evident, highly robust databases which can be read and updated only by those with permission. Although much has been written from an academic perspective on the potential of combining these ground-breaking technologies, real-world applications are sparse at the moment. However, I expect this situation to change in the near future. So here are three ways in which AI and blockchain are made for each other.
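The tamper-evidence property described above comes from hash-linking rather than encryption per se. A minimal Python sketch of a hash-chained ledger (a toy model, not a real blockchain implementation) shows why rewriting history is detectable:

```python
import hashlib
import json

def block_hash(core):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(core, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    core = {"data": data, "prev": prev}
    chain.append({**core, "hash": block_hash(core)})

def verify(chain):
    """Recompute every hash; any edit to earlier data breaks the links."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"data": b["data"], "prev": b["prev"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, "model weights v1")
append_block(chain, "model weights v2")
ok_before = verify(chain)       # links are intact
chain[0]["data"] = "tampered"   # try to rewrite history...
ok_after = verify(chain)        # ...and the recomputed hashes no longer match
```

Distribution across many computers adds the "robust" part: every participant can run this verification independently.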

Cyber criminals using complex financial system, study shows

The findings on cyber criminal money-laundering and cashing-out methods are part of a study into the macroeconomics of cyber crime and how the various elements link together, which has been led by Michael McGuire, senior lecturer in criminology at Surrey University. "This is the first time the financial flows of cyber criminals have been put together into a composite picture," said McGuire, who will present the full findings of the nine-month Web of Profit study at the RSA Conference in San Francisco from 17-19 April. "Law enforcement and cyber security professionals can use the study to understand how revenue generation is feeding into laundering, how laundering is feeding into more traditional methods of money-laundering, and the way cyber criminals are spending their money, so that they look at the intersections between the various networks more carefully," he told Computer Weekly.

To OSPF Or Not? Which Routing Protocol To Use

OSPF with a multipoint MAN is a classic DR/BDR LAN situation, reducing the amount of peer-to-peer flooding. I haven't run into this at large scale in a design setting yet. Would having such a MAN provide a pretty good reason to run OSPF overall? How would one damp instability in such a network? Is the failure domain too large? What number of peers is "too big" for a full-mesh MAN? The other problem I'm still mulling over is the OSPF WAN to dual-datacenter design. In one case, a customer was running more than 250 VLANs (one per area) over DWDM, and more recently over OTV, between datacenters, with more than 4,000 GRE over IPsec tunnels. Dual-hub DMVPN and BGP route reflectors look very attractive compared to that. "Totally stubby EIGRP" -- hubs that advertise only 0/0 or a corporate default to remote sites -- could also work well. By the way, if you are using EIGRP, note Cisco's clever recent stub-site feature, which was probably built to simplify IWAN.
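The "how many peers is too big" question has a quick back-of-the-envelope answer. A full mesh of n routers needs n(n-1)/2 adjacencies, while electing a DR and BDR on a multi-access segment caps the count at roughly 2(n-2)+1 (each other router is adjacent only to the DR and BDR, plus the DR-BDR adjacency itself). A sketch, with 50 routers as an arbitrary example:

```python
def full_mesh_adjacencies(n):
    # every router peers with every other router: n choose 2
    return n * (n - 1) // 2

def dr_bdr_adjacencies(n):
    # each of the n-2 other routers is adjacent to the DR and the BDR,
    # plus one adjacency between the DR and BDR themselves
    return 2 * (n - 2) + 1 if n >= 2 else 0

mesh_50 = full_mesh_adjacencies(50)   # 1225 adjacencies to maintain
dr_50 = dr_bdr_adjacencies(50)        # 97 with a DR/BDR election
```

The quadratic-versus-linear gap is why the DR/BDR mechanism matters on a large multipoint MAN, and why full-mesh flooding stops scaling well past a few dozen peers.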

5 Applications Of Smart Contracts

Due to a lack of automated administration, it can take months for an insurance claim to be processed and paid. This is as problematic for insurance companies as it is for their customers, leading to admin costs, gluts, and inefficiency. Smart contracts can simplify and streamline the process by automatically triggering a claim when certain events occur. For example, if you lived in an area that was hit by a natural disaster and your house sustained damage, the smart contract would recognise this and begin the claim. Specific details (such as the extent of damage) could be recorded on the blockchain in order to determine the exact amount of compensation. ... The terms of a mortgage agreement, for example, are based on an assessment of the borrower's income, outgoings, credit score and other circumstances. The need to carry out these checks, often through third parties, can make the process lengthy and complicated for both the lender and the borrower. Cut out the middlemen, however, and parties could deal directly with each other (as well as access all the relevant details in one location).
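The claim-triggering logic can be sketched in ordinary Python rather than an actual smart-contract language; the policy fields, peril names, and amounts below are all hypothetical:

```python
def evaluate_claim(policy, event):
    """Hypothetical trigger logic: approve a claim automatically when a
    recorded event matches a covered peril, with payout capped at the
    insured sum."""
    if event["peril"] not in policy["covered_perils"]:
        return {"approved": False, "payout": 0}
    payout = min(event["assessed_damage"], policy["insured_sum"])
    return {"approved": True, "payout": payout}

policy = {"covered_perils": {"flood", "earthquake"}, "insured_sum": 200_000}
# an oracle records the disaster event on-chain; the contract reacts to it
claim = evaluate_claim(policy, {"peril": "flood", "assessed_damage": 250_000})
```

In a real deployment this logic would live on-chain and the event data would come from a trusted oracle; the sketch only shows why the months-long manual loop can collapse into a single deterministic check.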

China wants to shape the global future of artificial intelligence

“[The Chinese government] sees standardization not only as a way to provide competitiveness for their companies, but also as a way to go from being a follower to setting the pace,” says Jeffrey Ding, a student at Oxford University’s Future of Humanity Institute who studies China’s nascent AI industry, and who translated the report. The government’s plan cites the way US standards bodies have influenced the development of the internet, expressing a desire to avoid having the same thing happen with AI. China’s booming AI industry and massive government investment in the technology have raised fears in the US and elsewhere that the nation will overtake international rivals in a fundamentally important technology. In truth, it may be possible for both the US and the Chinese economies to benefit from AI. But there may be more rivalry when it comes to influencing the spread of the technology worldwide. “I think this is the first technology area where China has a real chance to set the rules of the game,” says Ding.

Open Source & Smart Mobility In The Transportation Industry

Open source projects in the big data space move their development and feature sets along quickly to harness the latest enhancements in technology, performance, and scalability. New best practices get baked into data platform solutions very quickly, and a huge community of data scientists, scripters, and programmers works toward the same goal, making best-of-breed technology available to anyone. At the foundational level, innovation occurs so rapidly that it is unrealistic to expect a vendor to encapsulate all these new developments in anything but a proprietary solution layered on top. Selecting an open source platform for data projects also reduces the risk of vendor lock-in. When it comes to the data space, as with most things, putting all your eggs in one basket is inadvisable. Much of the innovation occurring in the open source data space is directly attributable to the best and brightest minds' aversion to being tied down to a single vendor, which makes a shared effort much more attractive.

Transforming Bank Compliance with Smart Technologies

Digitization, the final stage in the transformation process, has the potential to create a step change in compliance operations. The catalyst is the emergence of smart technologies, which offer significant performance improvements and the ability to mimic human capabilities such as learning, language use, and decision making. Smart technologies have multiple potential applications in the context of compliance, from support for relatively routine tasks in client onboarding to analysis of unstructured data sets—for example, in relation to money laundering. Across the board, these technologies offer a route to significant efficiency gains and can help employees work more effectively. The starting point in building a cutting-edge compliance framework is to establish a taxonomy that describes and classifies key areas of risk. Such a taxonomy is also a prerequisite for defining the scope of a target operating model. The six most relevant types of compliance risks relate to financial crime and conduct.

IBM sets up no-fee Data Science Elite team to speed up AI adoption

Big Blue is calling its new offering "Cloud Private for Data", based on an in-memory database. It adds up to a platform for doing data science, data engineering and app building. IBM said the aim is to "enable users to build and exploit event-driven applications capable of analysing data from things like IoT [internet of things] sensors, online commerce, mobile devices, and more". ... IBM is also announcing a "Data Science Elite Team", described as a "no-charge consultancy dedicated to solving clients' real-world data science problems and assisting them in their journey to AI". Patricia Maqetuka, chief data officer at South African bank Nedbank, has used the team. She said: "Nedbank has a long tradition of using analytics on internal, structured data. Thanks to IBM Analytics University Live, we were exposed to the guidance and counsel of IBM's Elite team. This team helped us to unlock new paradigms for how we think about our analytics and change the way we look at use cases to unlock business value."

Quote for the day:

"Don't waste words on people who deserve your silence. Sometimes the most powerful thing you can say is nothing at all." -- Joubert Botha

Daily Tech Digest - March 15, 2018

How Valuable Is Your Company's Data?

"As best as I can tell, there's no manual on how to value data but there are indirect methods. For example, if you're doing deep learning and you need labeled training data, you might go to a company like CrowdFlower and they'd create the labeled dataset and then you'd get some idea of how much that type of data is worth," said Ben Lorica, chief data officer at O'Reilly Media. "The other thing to look at is the valuation of startups that are valued highly because of their data." Observation can be especially misleading for those who fail to consider the differences between their organization and the organizations they're observing. The business models may differ, the audiences may differ, and the amount of data the organization has and the usefulness of that data may differ. Yet, a common mistake is to assume that because Facebook or Amazon did something, what they did is a generally-applicable template for success. However, there's no one magic formula for valuing data because not all data is equally valuable, usable or available.

Let Your Data Scientists Be Human

Humans are better at common sense than computers, instantly recognizing when a decision doesn't make sense. This does not mean that humans are obsolete. Humans are stronger at communication and engagement, context and general knowledge, creativity, and empathy. When I have a frustrating problem, I want to talk to a human — someone who will understand my exasperation, listen to my experience, and make me feel valued as a customer while also solving my problem for me. And humans can be creative. I recently heard music composed by a computer, and I'm sure that song won't make it into the Top 40! Traditionally, businesses have hired data scientists who manually designed and built algorithms. The data scientists spent much of their time writing code and applying mathematics and statistics. They had no time to be human.

Yet Another Hospital Gets Extorted by Cybercriminals

Unlike the vast majority of ransomware attacks, the Hancock attack was not the byproduct of a successful phishing campaign. “The [hacking] group obtained the login credentials of a vendor that provides hardware for one of the critical information systems used by the hospital,” explains Hancock Health President and CEO Steve Long. “Utilizing these compromised credentials, the hackers targeted a server located in the emergency IT backup facility utilized by the hospital...” Since they’d made a practice of regularly backing up all of their critical files, Hancock administrators initially believed that they would be able to purge the compromised files and replace them with clean backup versions. Unfortunately, it turned out that the “electronic tunnel” between the backup site and the hospital had been intentionally blocked. Several days later, administrators discovered that “the core components of the backup files from many other systems had been purposefully and permanently corrupted by the hackers.”

Is this the dawn of the robot CEO as artificial intelligence progresses?

A human CEO can be corrupted by outside influence, but generally they have the freedom to make up their own minds and will face life-changing consequences should their impropriety be discovered. Robot CEOs, on the other hand, could be completely 'brain-washed' by cybercriminals. For all of their incisive decision-making and their unfaltering commitment to the company's balance sheets, board, and shareholders, a robot CEO could effectively ruin a company in seconds, or – if obfuscation is the game – quietly skim profits from the company in a 'death by a thousand cuts' approach. Kaspersky Lab researchers think the idea of robot CEOs is intriguing, but have some very real concerns about a future where robots are given too much responsibility. Cybercriminals go where the money is. That means if the robot stands between them and the possibility of substantial financial gain, they'll find a way to exploit it. It's always a cat and mouse game in cybersecurity. We come up with a defence; they find a way around it. It would be no different for a robot CEO.

Can The CIO Be The CDO?

The SAP Digital Transformation Executive Study indicates that successful companies must combine the best of these modes, resulting in what is effectively a “bimodal” approach to driving innovation. Our findings suggest that 72% of digital transformation leaders see a bimodal architecture as key to maintaining their core processes while quickly implementing next-generation technology. For better or worse, CIOs are traditionally associated with mode 1 – keeping the company running efficiently and effectively, at the lowest cost and least disruption. (No wonder they reigned during the era of deploying ERP systems.) It’s mission-critical, but it’s also the less glamorous side of IT today. In contrast, CDOs are all about disruption and digital transformation – the “mode 2” initiatives: driving new sources of revenue generation and using data to improve the customer experience. According to the SAP study, mode 2 initiatives fall into the category of “core business goals” for 96% of the Top 100 leaders in digital transformation, compared with 61% of laggards.

Voice-Operated Devices, Enterprise Security & the 'Big Truck' Attack

Biometric authentication, for one, doesn't solve the problem. In theory, Alexa could learn to identify authorized people's voices and listen only to the commands they give. But while this seems like a possible solution, the opposite is actually true. To begin with, there is an inherent trade-off between usability and security. Implementing such a system means that users would have to go through an onboarding process to teach Alexa, or any other voice-enabled device, how they sound. Compared to the status quo, where Alexa works out of the box, we are talking about a serious degradation in user comfort. Biometric identification also means false rejections: if your voice sounds different because you are sick, sleepy, or eating, Alexa will probably not accept you as an authorized user. And this is not all — there are systems available (like this example of Adobe VoCo) that, given a sample of a person's voice saying one thing, can generate a new sample of that voice saying another thing.
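The usability/security trade-off can be illustrated with a toy speaker-verification sketch that compares voiceprints by cosine similarity against a threshold. The vectors and thresholds here are invented (real systems use learned embeddings), but the failure mode is the same: a strict threshold rejects the legitimate-but-congested speaker, while a lenient one widens the window for spoofing:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def accept(enrolled, sample, threshold):
    """Accept the speaker when the voiceprint similarity clears the threshold."""
    return cosine(enrolled, sample) >= threshold

enrolled = [0.9, 0.1, 0.4]      # toy enrolled voiceprint
sick_voice = [0.7, 0.4, 0.5]    # same person, congested: similarity ~0.93
strict = accept(enrolled, sick_voice, 0.99)   # rejected: a false rejection
lenient = accept(enrolled, sick_voice, 0.90)  # accepted, but easier to spoof
```

There is no threshold that fixes both ends at once, which is why biometrics alone do not settle the voice-device problem.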

With auditability, deep learning could revolutionise insurance industry

Given the low level of risk involved, said Natusch, “correlation is sufficient to drive action”. For applications such as Google’s photo identification, neural network-based algorithms are sufficient, he said. But in a risk-averse use case, such as decision-making in healthcare, people need to understand why a given decision was made, he said. This requires a causation-based approach, more suited to probabilistic graphical models, he added. Speaking of his experiences at Prudential, Natusch said: “We need two models – one to understand historical data, and something for handwriting recognition.” Discussing how handwriting recognition could streamline claims processing at the insurer, Natusch said that once a paper claims form is scanned, it ends up as a grayscale image. This is effectively a set of numbers that can be analysed using a neural network.
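The "set of numbers" point is easy to make concrete. A toy 3x3 grayscale scan, normalized to the [0, 1] range a neural network's input layer typically expects (pixel values invented):

```python
# a toy 3x3 grayscale scan: 0 is black ink, 255 is white paper
scan = [
    [255, 250, 252],
    [ 12, 200, 248],
    [ 15,  10, 245],
]

# rescale to [0, 1] floats, the input range a network typically expects
normalized = [[pixel / 255 for pixel in row] for row in scan]

# a crude hand-rolled feature: the fraction of dark (inked) pixels
dark = sum(1 for row in scan for pixel in row if pixel < 128) / 9
```

A real claims scan is the same object at a much larger scale: a grid of intensities that a neural network consumes directly, with no understanding of what the marks mean.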

Innovation in Retail Banking 2017

The level of investment in both digitalization and innovation has increased in lockstep with each other as a result. The report illustrates the varying priorities of organizations of different sizes and the challenges and opportunities in the marketplace. More than ever, it is clear that having a defined innovation business model, with the application of data and advanced insights, is an imperative for success. It is also clear that being a ‘fast follower’ is not a viable strategy. We would like to thank Efma and Infosys Finacle for their partnership and their sponsoring of the 9th annual Innovation in Retail Banking research report. Their partnership has enabled us to create the most robust benchmarking of digitalization and innovation in banking, and to better understand the impact across all components of the financial services ecosystem.

How to train and deploy deep learning at scale

There was no deep learning in Spark MLlib at the time. We were trying to figure out how to perform distributed training of deep learning in Spark. Before actually getting our hands really dirty and trying to actually implement anything we wanted to just do some back-of-the-envelope calculations to see what speed-ups you could hope to get. ... The two main ingredients here are just computation and communication. ... We wanted to understand this landscape of distributed training, and, using Paleo, we've been able to get a good sense of this landscape without actually running experiments. The intuition is simple. The idea is that if we're very careful in our bookkeeping, we can write down the full set of computational operations that are required for a particular neural network architecture when it's performing training.
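That back-of-the-envelope bookkeeping can be sketched directly. Assuming naive data-parallel training, per-step time is computation (which divides across workers) plus gradient exchange (which does not); all the numbers below are invented for illustration:

```python
def step_time(step_flops, device_flops, params, bytes_per_param,
              bandwidth, workers):
    """Back-of-the-envelope time for one data-parallel training step:
    computation divides across workers, gradient exchange does not."""
    compute = step_flops / (device_flops * workers)
    # naive parameter-server-style exchange: every worker ships its gradients
    communication = (params * bytes_per_param * workers) / bandwidth
    return compute + communication

# invented numbers: a 10-TFLOP step, 10-TFLOP/s devices,
# 25M float32 parameters, 10 GB/s of network bandwidth
single = step_time(1e13, 1e13, 25e6, 4, 1e10, workers=1)
eight = step_time(1e13, 1e13, 25e6, 4, 1e10, workers=8)
speedup = single / eight  # well below the ideal 8x once communication bites
```

This is the intuition behind modeling tools like Paleo: careful bookkeeping over a network architecture's operations and parameter counts predicts where communication starts to dominate, without running a single experiment.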

Outbrain Outgrows Initial Big Data Infrastructure, Migrates

"If a researcher or algorithm developer wants to introduce something new, we want to say 'let's try it, let's go for it,'" Yaron said. But it had become too difficult to do that with the 5-year-old Hadoop cluster, which had grown to 330 nodes. "It resulted in bad performance for our algorithms when the Hadoop cluster wasn't stable enough," Yaron said. "At some point it became a source of frustration." Yaron's team decided to rebuild its Hadoop infrastructure with new physical servers and standardize on a MapR implementation of Hadoop. "We are a company that runs a lot of open source, and we try to contribute to the open source community as well," Yaron told me. "However, there are cases where we feel there is value to enterprise technologies." The new physical servers changed the ratio between disk space, RAM, and CPU, Yaron said, and the hardware and software upgrades enabled Outbrain to reduce the footprint of Hadoop servers in the data center to one-third of what it had been before.

Quote for the day:

"More people would learn from their mistakes if they weren't so busy denying them." -- Harold J. Smith

Daily Tech Digest - March 14, 2018

How to build an intelligent supply chain

Intelligent supply chains boast the ability to perform continuous predictive analytics on enormous amounts of data and use machine learning models to review historical information to plan for current and future needs. To create such a system, the first thing a company needs is "an intelligent brain, a cognitive operating system," said Frederic Laluyaux, president and CEO of Aera Technology, a platform dedicated to building the self-driving enterprise. Companies already have the needed data in their transactional systems. "The brain does the job, and the data feeds the brain." The cognitive operating system provides the computing connectivity. Laluyaux said that Aera's system crawls transactional systems much as Google crawls websites. In one case, its system has to crawl 54 different ERPs at a single company, which is not unusual. "Every big company that has gone through mergers and acquisitions will have the same complexity," Laluyaux said. Even if a company has standardized its ERPs, it will have many different modules.

Raspberry Pi 3 Model B+ arrives

TechRepublic's Nick Heath got an early hands-on with the Raspberry Pi 3 Model B+ and found in benchmarking tests that it's the fastest Raspberry Pi model available, both in single-core and quad-core measurements. As Heath notes, the addition of 802.11ac Wi-Fi gives the Raspberry Pi 3 Model B+ triple the maximum throughput of the Pi 3 Model B's 2.4GHz 802.11n Wi-Fi. Eben Upton, co-creator of the popular developer board, told TechRepublic that its B+ releases are all about refinement. "It's not a Raspberry Pi 4. I think what we've ended up with is something which is a really good B+, a bit too good for a B+, but that would be not really anywhere near enough for a Raspberry Pi 4," said Upton. "The B+ is our attention-to-detail spin for the product, where we take all the stuff we've learned in the past couple of years and smoosh it all together and make a new thing that addresses some of the concerns, or takes advantage of some of the opportunities that have come along in the intervening time."

What is security’s role in digital transformation?

McQuire admits that companies do struggle to keep up with the technological progress. “Many firms are simply unable to keep up with the rapid technology changes. The threat landscape is transforming before our eyes with malware, ransomware, and phishing attacks all rising rapidly,” he says. “There is also significant regulatory change occurring in the form of GDPR, which adds new pressures and holds those with weak security and privacy processes financially accountable.” “You combine this with a general lack of security talent in most firms and the fact that most run a complex web of legacy security technologies that don’t properly protect them from employees who now access work information across a mix of devices and cloud apps, and you have a security market that is booming,” McQuire adds. ...  CISOs, it appears, are trying to be present throughout the entire DX process. For instance, at an event late last year, Los Angeles CISO Timothy Lee said that CISOs that embrace digital transformation may help an organization adapt to a rapidly evolving global marketplace.

What CISOs Should Know About Quantum Computing

Quantum computing is quickly moving from the theoretical world to reality. Last week Google researchers revealed a new quantum computing processor that the company says may beat the best benchmark in computing power set by today's most advanced supercomputers. That's bad news for CISOs because most experts agree that once quantum computing advances far enough and spreads wide enough, today's encryption measures are toast. General consensus is that within about a decade, quantum computers will be able to brute-force public key encryption as it is designed today. Here are some key things to know and consider about this next generation of computing and its ultimate impact on security.
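One rule of thumb worth keeping in mind when weighing that threat: Shor's algorithm breaks today's public-key schemes (RSA, ECC) outright, while Grover's algorithm "only" gives a quadratic speedup against symmetric ciphers, effectively halving their key strength. A trivial sketch of that halving:

```python
def grover_effective_bits(key_bits):
    """Grover's search quadratically speeds up brute force,
    halving a symmetric key's effective strength in bits."""
    return key_bits // 2

aes128 = grover_effective_bits(128)  # 64 effective bits: uncomfortably weak
aes256 = grover_effective_bits(256)  # 128 effective bits: still strong
```

This is why the usual guidance pairs post-quantum public-key algorithms with a move to longer symmetric keys, rather than abandoning symmetric cryptography.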

Support Always-On Demands With Software-Defined Resiliency and DRaaS

Software-defined resiliency (SDR) is IBM’s approach to DRaaS. It helps ensure enterprise applications operate reliably and protect data even when disaster strikes. It’s the latest step in the journey to redefine data center operations in software. A software-defined approach makes disaster recovery more controllable and visible, enabling administrators to extend across hybrid cloud infrastructures. It also introduces perhaps the most valuable feature in an otherwise laborious process: orchestrated recovery. By automating and orchestrating the replication and recovery of not just the servers and virtual machines but also the applications and business services, disaster recovery becomes reliable and repeatable. SDR makes use of existing vendors’ data protection mechanisms like replication and backup, but also manages them. Instead of using different tools for each enterprise software product, it provides a single interface to control all replication and recovery processes.

A startup is pitching a mind-uploading service that is “100 percent fatal”

Brain uploading will be familiar to readers of Ray Kurzweil’s books or other futurist literature. You may already be convinced that immortality as a computer program is definitely going to be a thing. Or you may think transhumanism, the umbrella term for such ideas, is just high-tech religion preying on people’s fear of death. Either way, you should pay attention to Nectome. The company has won a large federal grant and is collaborating with Edward Boyden, a top neuroscientist at MIT, and its technique just claimed an $80,000 science prize for preserving a pig’s brain so well that every synapse inside it could be seen with an electron microscope. McIntyre, a computer scientist, and his cofounder Michael McCanna have been following the tech entrepreneur’s handbook with ghoulish alacrity. “The user experience will be identical to physician-assisted suicide,” he says. “Product-market fit is people believing that it works.” Nectome’s storage service is not yet for sale and may not be for several years. Also still lacking is evidence that memories can be found in dead tissue.

Serverless computing, containers see triple-digit quarterly growth among cloud users

There has also been huge growth in adoption of serverless computing among cloud users. In the fourth quarter of 2017, serverless adoption grew by 667 percent among the sites tracked, the survey's authors report. This is up from 321 percent just the quarter before. "Serverless continues to be attractive to organizations since it doesn't require management of the infrastructure," the report's authors observe. "As companies migrate increasingly to the cloud and continue to build cloud-native architectures, we think the pace of serverless adoption will also continue to grow." The study's authors also looked at cloud CPU consumption to draw conclusions about how people are deploying cloud power. The dominance of general-purpose workloads (employed 43 percent of the time) shows that most organizations start their cloud journey by moving development and test workloads, mostly provisioned as standard instances.

Avoiding security event information overload

The best SIEM vendor you can pick is one that understands that less is more. The Herjavec Group is one such company that recently caught my eye. Started by Robert Herjavec, one of the stars of ABC’s addictive Shark Tank television series, the Herjavec Group lives this philosophy. Here’s what Ira Goldstein, Herjavec Group’s senior vice president of global technical operations, said about their less-is-more philosophy, “[The data required to manage security for a modern enterprise infrastructure] has to be parsed, correlated, alerted, evaluated, analyzed, investigated, escalated, and remediated fast enough to protect integrity and operations. The only way to make sense of it all is to focus on fewer, more specific use cases that matter, as opposed to a high volume of low fidelity alerts.” “An effective security operation is driven by discipline, preventing use-case sprawl that causes information overload,” says Goldstein.
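The "fewer, more specific use cases" idea amounts to filtering on alert fidelity. A hypothetical sketch (the use-case names and scores below are invented):

```python
ALERTS = [
    {"use_case": "impossible-travel login",    "fidelity": 0.95},
    {"use_case": "single failed login",        "fidelity": 0.10},
    {"use_case": "malware beacon to known C2", "fidelity": 0.90},
    {"use_case": "port scan from internet",    "fidelity": 0.20},
]

def triage(alerts, min_fidelity=0.8):
    """Keep only high-fidelity use cases; treat the rest as noise
    to be tuned out rather than queued for analysts."""
    return [a for a in alerts if a["fidelity"] >= min_fidelity]

queue = triage(ALERTS)  # only the two high-fidelity alerts survive
```

Real SIEM tuning is done in correlation rules rather than a list comprehension, but the discipline is the same: a short queue of alerts that matter beats a long queue that trains analysts to ignore it.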

Reading very BIG text files using PowerShell

I recently ran into this very same problem, in which the bcp error message stated: An error occurred while processing the file "D:\MyBigFeedFile.txt" on data row 123798766555678. Given that the text file was far in excess of Notepad++'s limits (or even easy human-interaction usability limits), I decided not to waste my time trying to find an editor that could cope and simply looked for a way to pull out a batch of lines from within the file programmatically, to compare the good rows with the bad. I turned to PowerShell and came across the Get-Content cmdlet to read the file, which looked like it would give me at least part of what I wanted. At this point, there are quite a few ways to look at specific lines in the file, but in order to look at the line in question and its surrounding lines I could only come up with one perfect solution: by passing the pipeline output into the Select-Object cmdlet, I could use its rich -Skip and -First functionality.
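For readers outside PowerShell, the same windowing idea can be sketched in Python with itertools.islice, which streams the file instead of loading it, so file size does not matter. The file name here is a small stand-in for the real multi-gigabyte feed:

```python
from itertools import islice

def window(path, center_line, context=3):
    """Yield (line_number, text) pairs around a 1-based line of interest,
    streaming the file so even huge inputs stay cheap to inspect."""
    start = max(center_line - context, 1)
    stop = center_line + context + 1
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        # islice skips start-1 lines, then stops after the window
        for num, text in enumerate(islice(f, start - 1, stop - 1), start=start):
            yield num, text.rstrip("\n")

# write a tiny stand-in for the huge feed file, then inspect around line 5
with open("feed.txt", "w") as f:
    f.writelines(f"row {i}\n" for i in range(1, 11))
rows = list(window("feed.txt", 5, context=1))
# rows -> [(4, "row 4"), (5, "row 5"), (6, "row 6")]
```

Like the Get-Content | Select-Object pipeline, this never holds more than one line in memory at a time, which is the whole trick for files too big for any editor.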

How to Interpret the SEC’s Latest Guidance on Data Breach Disclosure

Specifically, corporate officers, directors and “other corporate insiders” are prohibited from trading shares if they have knowledge of any unpublicized security incident within the company. While the overall intent of this latest statement is clear, the guidance is vague in key areas by design. For instance, the second section of the guidance emphasizes that companies must make "timely disclosure of any related material nonpublic information." It’s unclear what the SEC explicitly means by "timely disclosure," as the SEC doesn’t provide a specific time limit that companies must meet. This puts a lot of trust in corporate leaders to put speedy remediation and due diligence at the center of their security policy, which is a bit of a gamble given the track record of executive action during the fallout of the Equifax breach. The GDPR, on the other hand, is much more prescriptive, giving organizations 72 hours to report an incident related to the personal data of EU citizenry.

Quote for the day:

"Change is not a threat, it's an opportunity. Survival is not the goal, transformative success is." -- Seth Godin