Daily Tech Digest - November 17, 2018

The researchers from New York University detail in a new paper how they used a neural network to create 'DeepMasterPrints', or realistic synthetic fingerprints that have the same ridges visible when rolling an ink-covered fingertip on paper. The attack is designed to exploit systems that match only a portion of the fingerprint, like the readers used to control access to many smartphones. The aim is to generate fingerprint-like images that match multiple identities, so a single attempt can spoof many users. DeepMasterPrints are an improvement on the MasterPrints the researchers developed last year, which relied on modifying details from already captured fingerprint images used by a fingerprint scanner for matching purposes. The previous method was able to mimic the images stored in the file, but couldn't create a realistic fingerprint image from scratch. The researchers tested DeepMasterPrints against NIST's ink-captured fingerprint dataset and another dataset captured from sensors.


The strategy of treating containers as logically identical units that can be replaced, spun up, and moved around without much thought works really well for stateless services, but it is the opposite of how you want to manage distributed stateful services and databases. First, stateful instances are not trivially replaceable, since each one has its own state which needs to be taken into account. Second, deployment of stateful replicas often requires coordination among replicas—things like bootstrap dependency order, version upgrades, schema changes, and more. Third, replication takes time, and the machines the replication is done from will be under a heavier load than usual, so if you spin up a new replica under load, you may actually bring down the entire database or service. One way around this problem—which has its own problems—is to delegate state management to a cloud service or database outside of your Kubernetes cluster. That said, if we want to manage all of our infrastructure in a uniform fashion using Kubernetes, what do we do?
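Kubernetes's own answer to these stateful-workload constraints is the StatefulSet, which gives each replica a stable identity, its own persistent volume, and ordered, one-at-a-time rollout. A minimal sketch (the name, image, and storage sizes here are illustrative, not from the article):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                     # headless service gives each pod a stable DNS name
  replicas: 3
  podManagementPolicy: OrderedReady   # pods start and stop one at a time, in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11            # illustrative image
  volumeClaimTemplates:               # each replica keeps its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

The pods come up as db-0, db-1, db-2 in order, which addresses the bootstrap-dependency concern above, though it does not by itself solve replication load or schema coordination.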


A data lake is where vast amounts of raw data, or data in its native format, are stored, unlike a data warehouse, which stores data in files or folders (a hierarchical structure). Data lakes provide unlimited space to store data, unrestricted file size and a number of different ways to access data, as well as providing the tools necessary for analysing, querying and processing. In a data lake, each data item is assigned a unique identifier and metadata tags. In this way the data lake can be queried for relevant data, and that smaller set of relevant data can then be analysed. Data can also be stored in data lakes before being curated and moved to a data warehouse. ... Azure Data Lake is compatible with the Hadoop File System (HDFS) and enables Microsoft services such as Azure HDInsight and Revolution-R Enterprise, as well as industry Hadoop distributions like Hortonworks and Cloudera, to connect to it. Azure Data Lake has all Azure Active Directory features, including Multi-Factor Authentication, conditional access, role-based access control, application usage monitoring, and security monitoring and alerting.
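The identifier-plus-tags model the excerpt describes can be illustrated with a toy in-memory catalog (the function and field names below are invented for illustration; a real data lake would back this with object storage and a metadata service):

```python
import uuid

# Minimal sketch of a data-lake catalog: each raw item gets a
# unique identifier plus free-form metadata tags at ingest time.
catalog = {}

def ingest(item, **tags):
    """Store a raw item under a unique identifier with metadata tags."""
    item_id = str(uuid.uuid4())
    catalog[item_id] = {"data": item, "tags": tags}
    return item_id

def query(**wanted):
    """Return items whose tags match every requested key/value pair."""
    return [entry["data"] for entry in catalog.values()
            if all(entry["tags"].get(k) == v for k, v in wanted.items())]

ingest(b"<sensor readings>", source="plant-a", kind="telemetry")
ingest(b"<web logs>", source="web", kind="logs")
print(query(kind="telemetry"))
```

Querying by tag first, then analysing only the matching subset, is exactly the "smaller set of relevant data" workflow described above.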


Harvard researchers want to school Congress about AI

Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that with these combined efforts, Congress and other policymakers will be better equipped to effectively regulate and shepherd the growing impact of AI on society. Over the past year, a series of high-profile tech scandals have made increasingly clear the consequences of poorly implemented AI. This includes the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems. In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable.  “The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund.


Role of digitisation and technologies like AI & ML in digital transformation of SMEs?


More specifically, AI-based solutions like automation can be greatly beneficial to SMEs in streamlining several processes like sales planning, managing finances and supply chain, marketing, etc. These processes, which most SMEs still conduct through offline methods, considerably reduce the efficiency of the enterprise, since the managers' focus is largely on operations rather than on serving customers and retaining them. Simultaneously, digitised business management and enterprise mobility solutions can enable SMEs to expand their business to any region within the country or outside, without having to worry about the associated infrastructural and monetary challenges.

Customised, enterprise-centric solutions with AI and Machine Learning

Every organisation faces a different set of issues and challenges. The solutions, then, to effectively tackle these challenges should also be specific to the business segment, as well as the industry, in which the enterprise is involved.


What Edge Computing Means for Infrastructure and Operations Leaders

Edge computing solutions can take many forms. They can be mobile in a vehicle or smartphone, for example. Alternatively, they can be static — such as when part of a building management solution, manufacturing plant or offshore oil rig. Or they can be a mixture of the two, such as in hospitals or other medical settings. The capabilities of edge computing solutions range from basic event filtering to complex-event processing or batch processing. “A wearable health monitor is an example of a basic edge solution. It can locally analyze data like heart rate or sleep patterns and provide recommendations without a frequent need to connect to the cloud,” says Rao. More complex edge computing solutions can act as gateways. In a vehicle, for example, an edge solution may aggregate local data from traffic signals, GPS devices, other vehicles, proximity sensors and so on, and process this information locally to improve safety or navigation. More complex still are edge servers, such as those found in next-generation (5G) mobile communication networks.


The rare form of machine learning that can spot hackers who have already broken in


In cybersecurity, supervised learning works pretty well. You train a machine on the different kinds of threats your system has faced before, and it chases after them relentlessly. But there are two main problems. For one, it only works with known threats; unknown threats still sneak in under the radar. For another, supervised-learning algorithms work best with balanced data sets—in other words, ones that have an equal number of examples of what it’s looking for and what it can ignore. Cybersecurity data is highly unbalanced: there are very few examples of threatening behavior buried in an overwhelming amount of normal behavior. Fortunately, where supervised learning falters, unsupervised learning excels. The latter can look at massive amounts of unlabeled data and find the pieces that don’t follow the typical pattern. As a result, it can surface threats that a system has never seen before and needs few anomalous data points to do so.
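One common way to realise this unsupervised approach (not necessarily the specific method the article's vendor uses) is an isolation forest, which is trained only on unlabeled baseline data and flags points that are easy to isolate. A sketch with scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" behaviour: many unlabeled events clustered around a baseline.
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
# A handful of anomalous events far from that baseline.
anomalies = rng.normal(loc=6.0, scale=0.5, size=(5, 2))

# Fit on unlabeled data only; no attack examples are needed for training.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

print(model.predict(anomalies))  # -1 marks a point as anomalous
```

Because the model learns only what "typical" looks like, it can flag behaviour it has never seen before, which matches the highly unbalanced character of security telemetry described above.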


Building a Web App With Yeoman

Released in 2012, Yeoman is an efficient open-source software system for scaffolding web applications, used for streamlining the development process. It is known primarily for its focus on scaffolding, which means the use of many different tools and interfaces coordinated for optimized project generation. GitHub hosts Yeoman. The Yeoman experience is three-tiered. Though they work together seamlessly, each part of Yeoman was developed separately and works individually. First, Yeoman includes "Yo," the command line utility used with Yeoman. This is the baseline of the Yeoman software platform. Next, Yeoman has "Grunt" and "Gulp," which are application builders that help automate your application development. Finally, the Yeoman software features "npm," a package manager. Package managers manage code packages for back-end and front-end development, and their dependencies, so you can develop your application. Yeoman provides developers with many options to combine in their development process.


Enterprise architecture still matters


Rather than checking in on how each team is operating, EAs should generally focus on the outcomes these teams have. Following the rule of team autonomy (described elsewhere in this booklet), EAs should regularly check on each team’s outcomes to determine any modifications needed to the team structures. If things are going well, whatever’s going on inside that black box must be working. Otherwise, the team might need help, or you might need to create new teams to keep the focus small enough to be effective. Most cloud native architectures use microservices, hopefully, to safely remove dependencies that can deadlock each team’s progress as they wait for a service to update. At scale, it’s worth defining how microservices work as well, for example: are they event based, how is data passed between different services, how should service failure be handled, and how are services versioned? Again, a senate of product teams can work at a small scale, but not on the galactic scale. 


Put Your BLL Monster in Chains

A very popular architecture for enterprise applications is the triplet Application, Business Logic Layer (BLL), Data Access Layer (DAL). For some reason, as time goes by, the Business Layer starts getting fatter and fatter, losing its health in the process. Perhaps I was doing it wrong. Somehow very well designed code gets old and turns into a headless monster. I ran into a couple of these monsters that I have been able to tame using FubuMVC's behaviour chains, a pattern designed for web applications that I have found useful for breaking down complex BLL objects into nice, maintainable pink ponies. ... High code quality is very important if you want a maintainable application with a long lifespan. By choosing the right design patterns and applying some techniques and best practices, any tool will work for us and produce really elegant solutions to our problems. If, on the other hand, you learn just how to use the tools, you are going to end up programming for the tools and not for the ones that sign your pay-checks.



Quote for the day:


"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright


Daily Tech Digest - November 16, 2018

Microsoft now offers blockchain development kit

Microsoft has released its serverless Azure Blockchain Development Kit, which promises to extend the capabilities of earlier blockchain-based development templates. “Apps have been built for everything from democratizing supply chain financing in Nigeria to securing the food supply in the UK, but as patterns emerged across use cases, our teams identified new ways for Microsoft to help developers go farther, faster,” Marc Mercuri, Microsoft’s blockchain engineering principal program manager, wrote in a blog post. “The Azure Blockchain Development Kit is the next step in our journey to make developing end-to-end blockchain applications accessible, fast, and affordable to anyone with an idea,” he said. A serverless approach, according to Mercuri, would “reduce costs and management overhead.” Without a virtual machine (VM) server to deal with, the kit is made affordable and “within reach of every developer—from enthusiasts to ISVs [independent software vendors] to enterprises.”


8 features a cybersecurity technology platform must have
Any security researcher will tell you that at least 90% of cyber attacks emanate from phishing emails, malicious attachments, or weaponized URLs. A cybersecurity platform must apply filters and monitoring to these common threat vectors for blocking malware and providing visibility into anomalous, suspicious, and malicious behaviors. ... Cybersecurity technology platform management provides an aggregated alternative to the current situation where organizations operate endpoint security management, network security management, malware sandboxing management, etc. ... CISOs want their security technologies to block the majority of attacks with detection efficacy in excess of 95%. When attacks circumvent security controls, they want their cybersecurity technology platforms to track anomalous behaviors across the kill chain (or the MITRE ATT&CK framework), provide aggregated alerts that string together all the suspicious breadcrumbs, and provide functions to terminate processes, quarantine systems, or rollback configurations to a known trusted state.



Vaporworms: New breed of self-propagating fileless malware to emerge in 2019

Fileless malware strains will exhibit wormlike properties in 2019, allowing them to self-propagate by exploiting software vulnerabilities. Fileless malware is more difficult for traditional endpoint detection to identify and block because it runs entirely in memory, without ever dropping a file onto the infected system. Combine that trend with the number of systems running unpatched software vulnerable to certain exploits, and 2019 will be the year of the vaporworm.

Attackers hold the internet hostage

A hacktivist collective or nation-state will launch a coordinated attack against the infrastructure of the internet in 2019. The protocol that controls internet routing (BGP) operates largely on the honour system, and the 2016 DDoS attack against DNS provider Dyn showed that a single attack against a hosting provider or registrar could take down major websites. The bottom line is that the internet itself is ripe for the taking by someone with the resources to DDoS multiple critical points underpinning the internet or abuse the underlying protocols themselves.


Making sense of Microsoft's approach to AI

As Guggenheimer explains, Microsoft's idea is to let customers jump in where they are. Those on the lower end of the AI experience chain might want to begin dabbling with AI with business intelligence and apps. Microsoft's announcement this week about its plan to add AI capabilities to Power BI (as explained here by my ZDNet colleague Andrew Brust) is the cornerstone of this part of Microsoft's strategy. For customers with a little more AI experience who are willing to do a bit more customization, Microsoft's Dynamics 365 software-as-a-service apps -- especially those which recently got their own AI boost -- provide another place to get their AI feet wet, Guggenheimer suggests. The next two pieces of Microsoft's AI strategy are where there have been a lot of announcements as of late. Microsoft is working on a number of AI "Accelerators," solution templates and analytics templates to give users a way to build on top of some repeatable patterns and practices around AI.


Why women leave tech

“Lack of career growth or trajectory is a major factor driving women to leave their jobs — this was the most common response (28 percent) when we asked why they left their last job,” writes Kim Williams, senior director of design at Indeed, in a summary of Indeed's research. “The second most-common reason for leaving was poor management, with a quarter of respondents choosing this reason. Slow salary growth came in as the third most-common reason (24 percent) respondents left their last job. By contrast, issues related to lifestyle, such as work-life balance (14 percent), culture fit (12 percent) and inadequate parental leave policies (2 percent) were less common reasons for leaving a job,” Williams says. ... As Williams writes, “Meanwhile, many women in tech believe that men have more career growth opportunities — only half (53 percent) think they have the same opportunities to enter senior leadership roles as their male counterparts. And among women who have children or other family responsibilities, almost a third (28 percent) believe they’ve been passed up for a promotion because they are a parent or have another family responsibility.”


What is the MEAN stack? JavaScript web applications

In short, the MEAN stack is JavaScript from top to bottom, or back to front. A big part of MEAN’s appeal is this consistency. Life is simpler for developers because every component of the application—from the objects in the database to the client-side code—is written in the same language.  This consistency stands in contrast to the hodgepodge of LAMP, the longtime staple of web application developers. Like MEAN, LAMP is an acronym for the components used in the stack—Linux, the Apache HTTP server, MySQL, and either PHP, Perl, or Python. Each piece of the stack has little in common with any other piece.  This isn’t to say the LAMP stack is inferior. It’s still widely used, and each element in the stack still benefits from an active development community. But the conceptual consistency that MEAN provides is a boon. If you use the same language, and many of the same language concepts, at all levels of the stack, it becomes easier for a developer to master the whole stack at once.


Shift to outcomes-based security by focusing on business needs

As well as an emphasis on education, it is essential that organisations foster a culture that supports “doing the right thing”. This requires mechanisms and processes that enable concerns to be raised easily and without fear of retribution. This does not happen overnight, however, and enterprises need to allow time for it to embed fully. It is important that people throughout the organisation feel supported and confident in speaking up about any activities that may adversely affect the security design or increase the threats. This may sound obvious, but business projects have defined plans and milestone dates, and standing in the way of these to raise concerns from a secure architecture point of view is a daunting prospect. However, a supportive culture and an outcomes-focused security strategy will champion legitimate challenges, hearing and considering the claim regardless of the seniority of the individual making it. Similarly, there need to be appropriate channels for individuals to flag poor practice, without having to challenge the perpetrator directly.


Google Cloud Scheduler brings job automation to GCP

While Google encourages customers to use Cloud Scheduler for App Engine workloads on GCP, the service also works with any HTTP/S endpoint or Publish/Subscribe messaging topic. One example of the former is an on-premises enterprise application that exposes back-end data to a cloud service via HTTP/S. Publishers take many forms, such as a sensor installed at a remote oil rig. As the sensor generates various types of messages, the publish/subscribe approach sends them to a broker system, which then forwards them on to subscribers in real time. This approach can save time and effort by eliminating the maintenance of a slew of point-to-point integrations, and it makes sense for use cases such as IoT. Google offers a publish/subscribe service for GCP. Google Cloud Scheduler uses a serverless architecture, so customers only pay for job invocations as needed; pricing starts at $0.10 per job, per month, with three free jobs per month. It's difficult to compare Cloud Scheduler's cost to, for example, Azure Scheduler, which has a much more granular pricing model.
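The decoupling described here, where publishers hand messages to a broker that fans them out to subscribers, can be sketched with a toy in-process broker (this illustrates the pattern only; it is not the GCP Pub/Sub API, and the topic and field names are invented):

```python
from collections import defaultdict

class Broker:
    """Toy in-process message broker: publishers and subscribers
    only know the topic name, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A subscriber registers interest in a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The broker forwards the message to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("oil-rig/sensors", received.append)       # subscriber side
broker.publish("oil-rig/sensors", {"pressure_psi": 2150})   # sensor publishes
print(received)
```

Because neither side holds a reference to the other, adding a new subscriber requires no change to the publisher, which is the point-to-point-integration savings the article mentions.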


Securing the IoT has become business-critical

The near ubiquity of IoT does raise the security flag, as it presents a significant threat vector for hackers to breach companies. DigiCert’s goal in running the survey was to understand the state of IoT adoption, understand security implications, and quantify the benefits of having made the investments in IoT security. The survey focused on the four industry verticals where IoT was most mature — industrial, consumer products, healthcare, and transportation — and sampled companies of all sizes, with the median size being 3,000 employees. The survey asked what objective companies were trying to achieve with IoT. The top responses were operational efficiency, customer experience, increased revenue, and business agility. It’s been my experience that businesses that are early in the adoption cycle of IoT are looking to cut costs through automation, which leads to better efficiency, but they quickly pivot to customer experience as a way of creating new revenue streams.


Ahead of Black Friday, Rash of Malware Families Takes Aim at Holiday Shoppers

“The malware can intercept input data on target sites, modify online page content, and/or redirect visitors to phishing pages,” Kaspersky Lab researchers noted in a posting on Thursday, one week ahead of Thanksgiving. They added that the malicious code, once installed, often lies in wait for the consumer to visit an e-commerce page, and then simply grabs the payment form wholesale. “Form-grabbing is a technique used by criminals to save all the information that a user enters into forms on a website,” the team noted. “And on an e-commerce website, such forms are almost certain to contain a login and password combination as well as payment data such as credit card number, expiration date and CVV. If there is no two-factor transaction confirmation in place, then the criminals who obtained this data can use it to steal money.” Armed with the stolen credentials, cybercriminals could hawk them on the Dark Web, or simply use the stolen accounts themselves – they can buy things from a website using victims’ credentials, and then resell the ill-gotten goods to make a nice profit – a process that comes with built-in money-laundering.



Quote for the day:


"The ultimate measure of a man is not where he stands in moments of comfort, but where he stands at times of challenge and controversy." -- Martin Luther King, Jr.


Daily Tech Digest - November 15, 2018

Every technological advance can and will be exploited at some point, but if we think before we quickly push devices out into consumers’ and corporations’ hands – if we build security and privacy in to start with – we’ll have a better handle on what can go wrong. Take medical devices, for instance. Per a recent study by Trend Micro, more than 100,000 medical devices were discovered to be insecure. Think of an infusion pump precisely monitoring the flow of a lifesaving fluid into your loved one. Don’t think it can be hacked and the dosage changed? Think it doesn’t happen? The HIPAA Journal recently featured a study done by Vanderbilt University that suggested healthcare data breaches cause 2,100 deaths a year. Was this IoT related? I don’t know, but the evidence of what can happen with unmanaged, unsecure IoT is powerful and must be addressed. So, where to now? Want to learn more about IoT? It really applies to everything: medicine, health, transportation, smart cities and smart homes.


How to add IoT functions to legacy equipment

The hardest part of bringing the IoT to older systems seems to be dealing with the unique, one-off characteristics of each legacy situation — often without accurate documentation. “Older equipment sometimes requires a necessary, unique design step in each individual case,” Flynn says. The key, he adds, is to avoid disrupting the existing control scheme and operations of the legacy system. “We have to be careful not to create new issues. If the legacy system uses an older communication protocol, then we have to ensure not to overload any bandwidth or processor,” he says. If that’s not possible, the alternative is to select the right new IoT sensors and instrumentation to solve the particular problem. That, in turn, requires a higher level of operational technology expertise. But that’s only part one, Flynn says. You still have to network into an existing IT infrastructure, often using a combination of edge devices and sensors. New Wi-Fi connections may be needed.


Elastic tackles containers and APM in the new 6.5 release

As Elastic adds capabilities for supporting the new forms of deployments, largely cloud-native, involving containers and serverless infrastructure, another theme of the new release is going higher up the stack and ramping up competition with, as opposed to complementing, APM vendors. The new release of Elastic APM allows users to correlate data on application performance with infrastructure logs, server metrics, and security events to identify bottlenecks. In itself, this capability overlaps those of APM vendors. APM vendors have built their IP over the years understanding how to abstract low-level log readings from the standpoint of application processes making their way through IT infrastructure. A major difference from Elastic is that the APM crowd built their expertise in the walled gardens of data center deployments. By contrast, Elastic was not necessarily engineered for the cloud, but its scale-out, big data architecture made it a natural for the cloud.


Terraform orchestration matures as multi-cloud lingua franca

Terraform 0.12 makes remote state storage available free to users of the open source edition as well. Without this feature, multiple IT administrators might overwrite one another's infrastructure code or lack a single "source of truth" for infrastructure configurations. With 0.12, HashiCorp established a SaaS remote state management product for open source users that can indefinitely store an unlimited amount of state information. Terraform 0.12 also revamps the HashiCorp Configuration Language (HCL), its domain-specific language for infrastructure code, to make it more consistent and easy to use. Enterprise IT shops already favor Terraform orchestration for multi-cloud microservices management but said there was a time when ease of use was an issue. "Terraform has been instrumental for us to tame the chaos of multiple clouds and data centers," said Zack Angelo, director of platform engineering at BigCommerce, an e-commerce company based in Austin, Texas. "But in the past, if you weren't on Terraform Enterprise, migrating a state file was a pain point ..."
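Remote state in the open source edition is configured with a backend block in the Terraform configuration. A minimal sketch against Terraform's remote backend (the organization and workspace names below are placeholders, not from the article):

```hcl
terraform {
  backend "remote" {
    organization = "example-org"    # placeholder organization name

    workspaces {
      name = "production-infra"     # placeholder workspace name
    }
  }
}
```

With state stored remotely, every administrator reads and writes the same "source of truth," and the backend can lock state during operations so concurrent runs don't overwrite one another.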


Global Family Business Survey 2018


The release of our ninth PwC Global Family Business Survey comes at a time of extraordinary transformation. Digital technology is disrupting whole industries; sustainability is becoming central to the conduct of business; in the corporate and financial worlds, winning trust is more important than it’s ever been; and millennials represent an enduring demographic change. After surveying nearly 3,000 family businesses across 53 territories, we were able to show that family businesses - built around strong values and with an aspirational purpose - have a competitive advantage in disruptive times that pays off in real terms. We therefore believe there is an enormous opportunity for family businesses to start generating real gains from their values and purpose by adopting an active approach that turns these into their most valuable asset.


How Kubernetes is becoming a platform for AI

Xinglang Wang, a principal engineer at eBay, said AI had a high barrier to entry, but packaging tools in a Kubernetes cluster made it easier for businesses to get started on an AI project. At eBay, he said Kubernetes was used to create a unified AI platform, which enables data sharing and sharing of AI models. The AI platform also provides automation to enable eBay to train and deploy AI models. One of the big users at the KubeCon Shanghai event was Chinese e-commerce retailer JD.com. Explaining the use of AI at JD.com, principal architect Yuan Chen described how the company was running one of the largest Kubernetes clusters in the world. While it was traditionally used to support a microservices architecture, he said: “Everything is now driven by AI, so we have to use Kubernetes for AI. It is the right infrastructure for deep learning to train the AI models. AI scientists are expensive, so they should focus on their algorithms and not have to worry about deploying containers.”


The Linux desktop: With great success comes great failure

First, while the major Linux companies — Canonical, Red Hat and SUSE — all support Linux desktops, they all decided early on that the big money was to be made with servers (and nowadays with containers and the cloud). The biggest Linux players determined that the Linux desktop was a small market — and then they did very little to change that. But there’s more to it than that. The Linux desktop has also been plagued by fragmentation. There is no one Linux desktop; there are dozens, and they are not at all alike. There’s the Debian Linux family, which includes Ubuntu and Mint; the Red Hat team, with Fedora and CentOS; Arch Linux; Manjaro Linux; and numerous others. And then there are the desktop interfaces. Personally, as a dedicated Linux desktop user for decades, I love that I have a choice between GNOME, KDE Plasma, Cinnamon, Xfce, MATE, etc. for my desktop interface. But most people just find it confusing. All of that just scratches the surface.


GPS killer? Quantum 'compass' promises satellite-free navigation

The transportable quantum accelerometer could address GPS's dependence on satellite signals, which can be jammed or spoofed by an attacker, rendering the system useless for navigational information. Instead of using GPS, scientists from Imperial College London and UK laser instrument maker M Squared have demonstrated a way to measure how super-cooled atoms respond when inside an accelerating vehicle. Accelerometers are used for navigation, but as the researchers explain, they quickly lose accuracy over time unless aided by satellite signals. The satellite-free navigational device they created relies on M Squared's laser, which cools atoms in a chamber to the point where they behave in a quantum way, as both matter and waves. When a vehicle carrying the device moves, the wave properties of the cooled atoms are affected by its acceleration. A laser beam that acts as an 'optical ruler' measures how the atoms move over time.


Zero-trust security not an off-the-shelf product


Zero trust is a “business enabler” because, done correctly, it enables businesses to move faster and more securely, since it is a combination of processes and technologies, he said. “Security is improved because it effectively blocks lateral movement within organisations.” It is widely recognised that complexity is the enemy of security because it encourages end-users and business leaders to bypass security, said Simmonds. “The zero-trust model once again improves security by reducing complexity, and if you get it right, it works for everyone, including business partners, by providing a unified experience with greater flexibility and productivity,” he said. On the other hand, zero trust is not about trusting no one, said Simmonds; it is not a “next-generation perimeter” and it is not “VPN modernisation”. “It is not an off-the-shelf product,” he said.


Understanding the CEO’s role early in digital transformation programs

First, the CEO should be marketing the mission. It must be repeated to leaders and employees several times, and the CEO should help answer several key questions. Why must the organization pursue the defined digital business strategy? What are the issues with the existing business model? Who are the new competitors that are disrupting existing businesses, products, and services? What markets is the organization targeting? What are the new and emerging customer needs and expectations? Why is technology critical for future success? These communications should always end with some of the short-term goals of the program and how people can participate. The CIO and others on the leadership team should also be communicating and answering these questions, but the staff wants to know and see that the CEO is truly behind it and driving it. With a strategy and mission defined, there needs to be clarity on how the program is being led and how responsibilities are aligned.



Quote for the day:


"A leader must have the courage to act against an expert's advice." -- James Callaghan


Daily Tech Digest - November 14, 2018

Despite rise in security awareness, employees’ poor security habits are getting worse

Efforts to get around IT may not necessarily be done with malicious intent, but the reality is they directly increase IT risk for the organization. For example, 13% of employees admitted they would not immediately notify their IT department if they thought they had been hacked. Further compounding this issue is a workforce that tends not to understand the role of all employees in keeping an organization secure, as 49% of respondents would actually blame the IT department for a cyberattack if one occurred as a result of an employee being hacked. However, it’s not just today’s employees exposing organizations to risk. As digital transformation blurs the traditional security perimeter with cloud apps, it is also redefining what constitutes a “user.” Enterprises are increasingly adopting software bots powered by robotic process automation (RPA), and granting them access to mission-critical applications and data, just like their human counterparts.


GPUs are vulnerable to exploitation
A side-channel attack is one where the attacker uses how a technology operates, in this case a GPU, rather than a bug or flaw in the code. It takes advantage of how the processor is designed and exploits it in ways the designers hadn’t thought of. In this case, it exploits the counters in the GPU, which are used for performance tracking and are available in user mode, so anyone has access to them. The researchers describe three types of GPU attacks, all of which require the victim to download a malicious program that spies on their computer. The first attack tracks user activity on the web, since GPUs are used to render graphics in browsers. A malicious app uses OpenGL to create a spy program to infer the behavior of the browser as it uses the GPU. The spy program can reliably obtain all allocation events of each website visited to see what the user has been doing on the web and possibly extract login credentials. In the second attack, the authors extracted user passwords because the GPU is used to render the login/password box. Monitoring the leaked memory allocation events allowed for keystroke logging.
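The web-tracking attack works by matching the trace of GPU memory-allocation events against known per-site patterns. A minimal, purely illustrative sketch of that fingerprinting idea (the signatures, distance metric, and site names below are hypothetical, not from the paper):

```python
# Hypothetical sketch: identify a website by comparing an observed sequence
# of GPU allocation sizes against known per-site signatures, using a simple
# nearest-neighbour match. All values are illustrative.

def distance(a, b):
    """Sum of absolute differences, padding the shorter trace with zeros."""
    n = max(len(a), len(b))
    a = list(a) + [0] * (n - len(a))
    b = list(b) + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(trace, signatures):
    """Return the site whose allocation signature is closest to the trace."""
    return min(signatures, key=lambda site: distance(trace, signatures[site]))

signatures = {
    "bank.example": [4096, 1024, 65536, 2048],
    "mail.example": [8192, 8192, 512],
}
observed = [4100, 1000, 65000, 2048]      # the spy program's measurements
print(classify(observed, signatures))     # -> bank.example
```

The real attack is far more sophisticated, but the core point stands: no code flaw is needed, only side-channel signals that correlate with what the GPU is rendering.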


Defences based on Lockheed Martin’s cyber kill chain were mainly aimed at preventing reconnaissance, weaponising, delivery and exploitation, said Tolbert, with detection and response only required at the malware installation, callback and execution phases of the kill chain. While this is still a valid approach, he said the Mitre framework was more up to date and more realistic, with prevention mentioned only in connection with the initial access and execution phases, while detection and response is specified with regard to the eight other phases, including privilege escalation, credential theft, lateral movement and exfiltration. “These frameworks are useful in helping organisations to plan where they need to do work, and while prevention always will be important, there has been a shift in emphasis to detection and response. We believe artificial intelligence and machine learning [ML] can help in making this shift,” said Tolbert. 


Managing Change in the Face of Skepticism

To be sure, skepticism can play a positive role in organizations. It helps companies make better decisions. It can build tight trading algorithms and sturdy satellites. Doubting hockey-stick growth and suspicious data helps companies make better capital allocation choices. When skepticism is constructive, it can be leveraged to evaluate a change effort’s benefits, and build employee enthusiasm for it. Cynicism is a different matter. It often stems from a history of failed programs or lack of management credibility. Cynicism breeds distrust and pessimism, so if it is present, transformation efforts must restore credibility before moving to the next step of the plan. Regardless of an organization’s culture, business leaders should avoid strong-arming change — recent failed transformation efforts have shown the pitfalls of that approach. When leading a transformation in a skeptical culture, look within and leverage the skepticism to move forward.


Why the Artificial Intelligence Era Requires New Approaches to Create AI Talent


With the horizons of artificial intelligence expected to broaden in the future, one can tell that AI talent will have an imperative role to play in company performance. The talent that a company possesses in the field of AI dictates how well it is able to manage the analytics for the future. The best AI talent in the market will understand the performance of different models and harness them to their full potential. This knowledge is what AI companies will crave in the future. As the age of AI kicks in, the management philosophy will also change. While management was previously involved in routine decision making and innovation, in the AI age organizations will rely more on their top talent to define and lead innovation. The innovation that the workforce inside an organization brings would be the differentiating factor for all forms of AI companies. Their workers would help propel them forward and foster innovation for them.


Mastering data governance initiatives in the age of IIoT

Most IIoT gadgets both send and receive information about processes that occur within the scope of the businesses that use them. However, concerning distributed data, companies must ensure it doesn't reveal information to recipients that could highlight trade secrets. For example, many IIoT sensors track various actions that happen in assembly lines. If recipients can extract details from information that tells them how companies go about making their products and what helps them stand out, businesses will discover their operations are not sufficiently locked down from outside parties. While some of those entities might not seek gain from the information, others may try to mimic certain practices. When that happens, the increased competitiveness mentioned above becomes less prominent and may no longer be relevant at all. However, keeping sensitive information secret is not straightforward. That's because it takes substantial forethought to figure out how to spend money on IIoT equipment that works seamlessly together.


How to Prevent Data Leaks in a Collaborative World 


MyWorkDrive is a secure data access and collaboration solution. It does not require the organization to copy all its data to a cloud provider, and it does not require users to access data via VPN. Data is shared and collaborated on in place. The customer installs the MyWorkDrive server software on a server within their environment. IT then points the MyWorkDrive server to the existing fileserver mount points to which it wants to allow access. The solution moves no data in the process. Users can directly access data through the MyWorkDrive WebClient, native desktop client, iOS or Android app. The software initially presents shared files in a browser window. MyWorkDrive administrators designate the actions users can take on those shared files. The administrator can remove the ability to download the file, to copy data to the clipboard and to take screenshots. The administrator can also watermark the files which should discourage a user from taking a picture of the screen with a smartphone.


Why cryptojacking malware is a bigger threat to your PC than you realise

That's because cryptocurrency miners give attackers a foothold into PCs which can be exploited to deliver more damaging malware in future, security firm Fortinet has warned in its latest threat landscape report - noting that underestimating cryptojacking places organisations under heightened risk. "What we're finding out is that this particular malware also has other nefarious activities that it does while it's mining for cryptocurrency," Anthony Giandomenico, senior security researcher at Fortinet's FortiGuard Labs told ZDNet. "It will disable your antivirus, open up different ports to reach out to command and control infrastructure, it can download other malware. Basically, it's reducing or limiting your security shields, opening you up to lots more different types of attacks". A number of examples of cryptocurrency miners packing an additional punch have already been spotted in the wild: PowerGhost alters how systems perform scans and updates, while also disabling Windows Defender.


Did IBM overhype Watson Health's AI promise?

While IBM faces declining revenue overall, and its recently released third-quarter earnings showed revenue from cognitive offerings was down 6% from last year, Watson Health saw growth, according to Barbini. He noted that IBM does not release numbers specific to Watson Health for "competitive reasons." Barbini admitted that developing Watson Health and, specifically, Watson for Oncology is not an easy task, but it remains an important one. "That's why IBM dove into it three years ago. Did you really think oncology would be mastered in three years?" Barbini said. "However, let's look at the facts. More than 230 hospitals are using one of our oncology tools. We've had 11 [software] updates over last year and half and we've doubled the number of patients we've reached to over 100,000 as of the end of the third quarter of this year." Earlier this month, the head of Watson Health for the past three years, Deborah DiSanzo, stepped down and Kelly took over. DiSanzo is continuing to work with IBM Cognitive Solutions' strategy team, according to a company spokesperson.


Cisco fuses SD-WAN, security and cloud services

What Cisco is doing is adding support for its Umbrella security system to its SD-WAN software, which runs on top of the IOS XE operating system that powers its core branch, campus and enterprise routers and switches. Cisco describes Umbrella as a cloud-delivered secure internet gateway that stops current and emergent threats over all ports and protocols. It blocks access to malicious domains, URLs, IPs, and files before a connection is ever established or a file downloaded. It basically protects customers and communications at the Domain Name System (DNS) layer. Umbrella’s key features come from OpenDNS, which Cisco bought for $635 million in 2015. OpenDNS offers a cloud service that prevents customers from connecting to dangerous internet IP addresses such as those known to be associated with criminal activity, botnets, and malicious downloads. “Umbrella blocks access to malicious destinations before a connection is ever established, and it is backed by the threat intelligence of Cisco Talos,” Prabagaran said.
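The essence of DNS-layer protection is that a lookup for a known-bad domain is refused before any TCP connection or file transfer can begin. A minimal sketch of that idea (not Umbrella's actual implementation; the blocklist entries and upstream stub are hypothetical):

```python
# Illustrative DNS-layer blocking: a resolver that refuses to answer for
# domains (or their subdomains) on a threat list, so the client never even
# learns an IP address to connect to. Entries are made-up examples.

BLOCKLIST = {"malware.example", "botnet-c2.example"}

def resolve(domain, upstream=lambda d: "93.184.216.34"):
    """Return an IP for safe domains, or None if the domain is blocked."""
    parts = domain.lower().split(".")
    # Check the domain itself and every parent suffix against the list,
    # so sub.botnet-c2.example is blocked along with botnet-c2.example.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return None     # blocked before any connection is established
    return upstream(domain)

print(resolve("malware.example"))   # -> None (blocked)
print(resolve("www.example.com"))   # -> whatever the upstream resolver says
```

Because every connection starts with a DNS lookup, this single choke point covers all ports and protocols, which is the property the article highlights.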



Quote for the day:


"If you find a path with no obstacles, it probably doesn't lead anywhere." -- Frank A Clark


Daily Tech Digest - November 13, 2018

Colmena, an Architecture for Highly-Scalable Web Services


Cells are self-contained services that follow the hexagonal architecture. Each cell: Has a clear purpose and responsibility; Has some internal domain that represents, validates and transforms the cell’s state; Relies on a series of interfaces to receive input from (and send output to) external services and technologies; Exposes a public API (a contract stating what it can do). You can think of cells as very small microservices. In fact, we encourage you to try to make your cells as small as possible. In our experience, granulating your domain around entities and relationships helps you understand, test and maintain the codebase in the long run. In Colmena, changes to the domain are represented as a sequence of events. This sequence of events is append-only, as events are immutable (they are facts that have already taken place). In event sourcing, this sequence is called a “Source of truth”, and it provides: An audit log of all the actions that have modified the domain; The ability for other components (in the same or different cells) to listen to certain events and react to them.
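The event-sourcing pattern described above can be sketched in a few lines: state is never mutated directly; instead, immutable events are appended to a log, and the current state is rebuilt by folding over that log. This is an illustrative sketch in the spirit of Colmena's design, not its actual code:

```python
# Minimal event-sourcing sketch: an append-only event log as the
# "source of truth", with state reconstructed by replaying events.

class EventLog:
    def __init__(self):
        self._events = []            # append-only; events are never edited

    def append(self, event):
        self._events.append(event)

    def replay(self, apply, initial):
        """Rebuild current state by folding every recorded event over `apply`."""
        state = initial
        for event in self._events:
            state = apply(state, event)
        return state

# Example domain for one tiny "cell": a counter.
def apply(count, event):
    if event == "incremented":
        return count + 1
    if event == "decremented":
        return count - 1
    return count

log = EventLog()
log.append("incremented")
log.append("incremented")
log.append("decremented")
print(log.replay(apply, 0))   # -> 1
```

Because the log records every change as a fact, it doubles as an audit trail, and other cells can subscribe to the same events and react independently.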


How Millennials Should View the World of Data Science


So to summarize, here is what I feel MBA students (and business leaders) need to understand about the growing capabilities and power of Data Science: Data Science is a team sport that equally includes data engineers (who gather, prepare and enrich the data for advanced analytics), data scientists (who build analytic models that codify cause and effect and measure “goodness of fit”), and business stakeholders; Embrace the “Thinking Like A Data Scientist” approach in order to determine what problems to target with data science and how to apply the resulting customer, product and operational insights to derive and drive business value; Understand how to collaborate with the data science team around the Hypothesis Development Canvas that cements the relationship between the organization’s business strategy and specific AI and Machine Learning efforts; and Gain a high-level understanding of “what” advanced analytic capabilities, such as deep learning, machine learning and reinforcement learning, can do in uncovering customer, product and operational insights buried in the organization’s data.


Internet Explorer scripting engine becomes North Korean APT's favorite target in 2018

Microsoft became well aware of this component's security flaws many years ago. That's why, in July 2017, Microsoft announced that it was disabling the automatic execution of VBScript code in the latest IE version that was included with the Windows 10 Fall Creators Update, released in the fall of last year. That change meant that hackers couldn't use VBScript code to attack users via Internet Explorer in Windows 10. Microsoft also promised patches to disable VBScript code execution in IE versions on older Windows releases. That change stopped many cybercrime operations, but DarkHotel seems to have adapted to Microsoft's recent VBScript deprecation announcement. According to reports, DarkHotel apparently opted to use VBScript exploits embedded inside Office documents and did not target Internet Explorer users via the browser directly.


AMD continues server push, introduces Zen 2 architecture
As part of the news conference, AMD acknowledged that Zen 4 is “in design,” meaning still on paper. Given Zen 3 is due in 2020, don’t figure on seeing Zen 4 until 2022 or so. Beyond that, the company said only that it would offer higher performance and performance per watt when compared to prior generations. It’s been a good few weeks for AMD and Epyc. Last week, Oracle announced it would offer bare-metal instances on Epyc, and today Amazon Web Services (AWS) announced that Amazon Elastic Compute Cloud (EC2) will use Epyc CPUs as well, so customers can get access today to instances running on the AMD processors. Intel noted that it, too, has an extensive relationship with AWS. So, now AMD has license deals with all of the major server vendors (HPE, Dell, Lenovo, Cisco) and almost all of the major cloud vendors. It had previously announced deals with Microsoft Azure and China’s Baidu and Tencent.


A foundational strategy pattern for analysis: MECE

MECE, pronounced "mee-see," is a tool created by the leading business strategy firm McKinsey. It stands for "mutually exclusive, collectively exhaustive," and dictates the relation of the content, but not the format, of your lists. Because of the vital importance of lists, this is one of the most useful tools you can have in your tool box. The single most important thing you can do to improve your chances of making a winning technology strategy is to become quite good at making lists. Lists are the raw material of strategy and technology architecture. They are the building blocks, the lifeblood. They are the foundation of your strategy work. And they are everywhere. Therefore, if they are weak, your strategy will crumble. You can be a strong technologist, have a good idea, and care about it passionately. But if you aren’t practically perfect at list-making, your strategy will flounder and your efforts will fail. That’s because everything you do as you create your technology strategy starts its life as a list, and then blossoms into something else.


Many firms need more evidence of full benefits of artificial intelligence

Much of executives’ enthusiasm is justified. AI is already being deployed in a range of arenas, from digital assistants and self-driving cars to predictive analytics software providing early detection of diseases or recommending consumer goods based on shopping habits. A recent Gartner study finds that AI will generate $1.2 trillion in business value in 2018—a striking 70 percent increase over last year. According to Gartner, the number could swell to close to $4 trillion by 2022. This dramatic growth is likely reinforcing the perception among executives that such technologies can transform their respective industries. When looking at the external environment, encompassing economic, political, social, and other external developments that affect business, one-third of executives flagged positive technological disruption in their industry as a top opportunity.


Cylance researchers discover powerful new nation-state APT

The malware didn't just evade antivirus detection, however, it let itself be discovered by different antivirus vendors on preprogrammed dates, likely as a distraction tactic. "What we've got here in this case is a threat actor who has figured out how to determine what antivirus is running on your system and deliberately trigger it in an attempt to distract you," Josh Lemos, vice president of research and intelligence at Cylance, says. "That should be concerning organizations outside of Pakistan." Kill switches in malware have been seen before, such as in Stuxnet, but Cylance researchers say they've rarely seen a campaign that deliberately surrenders itself to investigators in this manner. "The White Company...wanted the alarm to sound," their report concluded. "This diversion was likely to draw the target's (or investigator's) attention, time and resources to a different part of the network. Meanwhile, the White Company was free to move into another area of the network and create new problems."


Firms lack responsible exec for cyber security

According to the report, although more people see the need for regular boardroom discussions about security, their organisations are failing to raise it sufficiently at the C-suite level. While 80% of all survey respondents agree that preventing a security attack should be a regular boardroom agenda item (up from 73% a year ago), only 61% say that it already is, which represents an increase of just 5% on last year. The report also suggests this lack of cohesion at the top of the organisation means that many are struggling to secure their most important digital assets. Fewer than half (48%) of respondents globally – 53% in the UK – say they have fully secured all of their critical data. But with the General Data Protection Regulation (GDPR) now fully in effect, this is no longer optional, but mandatory, the report notes. However, companies are beginning to take control of their data as cloud computing best practices mature, with 27% reporting that the majority of their organisation’s data is currently stored on premise or in datacentres (25%).


Avoiding Business Stasis by Modernizing Ops, Architecture & More


Fear is inevitable during any modernization growth spurt. For instance, the operations team may fear that an increase in automation will lead to the loss of human expertise. Re-architecting the software may be perceived by developers as a threat to well-defined traditional team scopes and organizations. For the business owner, a poorly executed modernization takes away resources and doesn’t lead to improved agility. The concern many folks voice when they don’t know how to run or create a platform is that they don’t know what their place will be in the new organization. But what has started to become clear to those participating in our modernization effort is that their skills are being expanded — not replaced. And that enables them to take on new roles in the organization. One of the fundamental things that’s happening at StubHub is a complete change in the way we think about new ideas. The change in our stack allows us to work in any language and because we fully expect to move beyond Java and get into Go and Ruby and node.js, we can innovate and rethink our future in more ways than ever before.


C language update puts backward compatibility first

C is the foundation for many popular software projects such as the Linux kernel and it remains a widely used language, currently second in the Tiobe index. Its simplicity makes it a common choice for software applications that run at or close to bare metal, but developers must take extra care in C, versus higher-level languages like Python, to ensure that memory is managed correctly—easily the most common problem found in C programs. Previous revisions to the C standard added features to help with memory management—including the “Annex K” bounds-checking feature. However, one of the proposals on the table for C2x is to deprecate or remove the Annex K APIs, because their in-the-field implementations are largely incomplete, non-conformant, and non-portable. Alternative proposals include replacing these APIs with third-party bounds-checking systems like Valgrind or the Intel Pointer Checker, introducing refinements to the memory model, or adding new ways to perform bounds checking for memory objects.



Quote for the day:


"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson