Daily Tech Digest - November 18, 2018


“Before integrating any new technologies into American life, we must be absolutely sure that those innovations are imbued with our values,” Democratic Sen. Edward Markey, who sent a letter to Amazon CEO Jeff Bezos expressing his concern about the company's facial recognition services, told BuzzFeed News. “I am not convinced Rekognition passes that test.” By contrast, decision-makers from Orlando seem prepared to go full steam ahead with tests of Amazon’s technology, though emails between city officials and Amazon reveal there were setbacks. Sgt. Eduardo Bernal, a public information officer for the city’s police department, told BuzzFeed News that Amazon provided no hands-on training on Rekognition, just standard documentation. Test results were flawed. There were miscommunications, including an embarrassing misstep that required an apology from Amazon — to the public and to Orlando PD.


Alphabet stops its project to create a glucose-measuring contact lens for diabetes patients

Google smart contact lens to measure glucose levels in tears.
"Our clinical work on the glucose-sensing lens demonstrated that there was insufficient consistency in our measurements of the correlation between tear glucose and blood glucose concentrations to support the requirements of a medical device," the company said. Verily made a big splash when it first launched the program in 2014, while it was still known as Google Life Sciences. The company partnered with Alcon, Novartis' eye-care division, on the project. However, it's been quiet about the project in the past few years, leading to speculation that it was winding down. Verily said it did have some success with the experiment in a controlled environment, but not in actual tests because of the dynamic environment of the eye. It's a problem that goes beyond Verily. Billions of dollars have been spent on research and development, but companies across both technology and life sciences have struggled. There's even a book dedicated to documenting these failures, titled "The Pursuit of Noninvasive Glucose: Hunting the Deceitful Turkey."


Is the Ransomware Scare Over?

The primary emphasis in ransomware preparation, other than user education and perimeter defense, is backups. In response to ransomware, IT needs to protect all data more frequently, including file servers and endpoints. To some extent, backing up all data is data protection 101, but in our experience most organizations, apart from their critical applications, back up most of their data just once per night. Ransomware makes once-per-night backups obsolete. While public announcements of ransomware attacks may be down, the “creativity” of these attacks is on the rise. According to Proofpoint, the number of ransomware variants is up 30x. The variations make it harder for perimeter defense solutions to detect them. Some of the variants specifically attack components of the data protection process, such as protected data stores and backup configuration files. Also, some malware strains now sit idle instead of immediately executing their encryption attack. This ensures that the malware file is backed up repeatedly by the data protection process.


Why tech-enabled go-to-market innovation is critical for industrial companies

While most industrial companies have come to terms with the need to make more strategic use of technology, they are often unsure of how to proceed or are focused on the wrong initiatives, resulting in halting action and a failure to build significant value. On the other hand, those companies that move quickly and decisively to transform their go-to-market channels, models, and culture through technology should be able to unlock substantial value: top-quartile B2B players generate 3.5 percent more revenue and are 15 percent more profitable than the rest of the B2B field. Our detailed analysis has identified a pool of $74 billion to $298 billion in revenue growth that could be tapped through enabling technology in sales (Exhibit 2). The value comes primarily through new customer experiences, refined pricing, and enhanced selling processes. ... Our experience in working with dozens of industrial companies has helped to identify where the main sources of value are across the four main steps of the selling process: the presales stage, the sales process, the transaction itself, and IoT-enabled selling.


Big banks are not feeling the FinTech heat (yet)

It’s the push-pull syndrome. FinTech apps push a lot of information to me because they’re intelligent; big bank apps force me to pull the information because they’re dumb. FinTech apps can predict and present my financial lifestyle to me intelligently; big bank apps show me what I’ve spent in a traditional debit and credit ledger that has no insight at all. Or that’s my experience of two of the most frequently used big bank apps. They’re pretty dumb. Meanwhile, my experience of some of the most popular FinTech apps is the opposite. ... Top of the fintechs is established payments unicorn TransferWise, with just 0.5 percent of the visitor share in the most recent week. Revolut, which recently announced it had signed up 1 million UK users, has just 0.3 percent of the market share, while Starling Bank has 0.2 percent. Traditional banks even dominate the new downloads list, though Starling manages to sneak into the top 10, with 4.6 percent of downloads in the most recent week.


How Do HIPAA Regulations Apply to Wearable Devices?

HIPAA regulations could potentially apply to new technologies used by covered entities and business associates.
How HIPAA regulations apply to wearable devices is a very difficult issue, Spencer said. “There is a lot of ambiguity about exactly where HIPAA is triggered and where it's not,” she stated. “The only real clarity is where a company that offers a wearable, or a mobile app that collects health information, where that arrangement is just directly between the device maker and the individual. Or it’s between the app maker and the individual, and there's no covered entity or business associate involved. Then there's no application of HIPAA, that's clear.” HIPAA regulations only apply to covered entities and business associates, Spencer reiterated. This includes health plans, healthcare clearinghouses and certain healthcare providers that engage in certain payment and other financial transactions. Business associates are those organizations that specifically have access to health information to provide a service or perform a function on behalf of a covered entity, she noted.


Are We Nearing The End Of Hadoop And Big Data?

So, it’s no longer just Hadoop. Cloudera chief executive Tom Reilly admitted as much in his comments after the merger: “Hadoop has evolved so drastically that we don’t even mention it anymore.” This analysis provides an overview of the options available to enterprises instead of Hadoop, and you have to wonder, if this trend continues, what the future holds for the technology. As the author writes, “The center of gravity has moved elsewhere.” What this development represents is how big data is now becoming just data. Every organization, large and small, now has access to an unparalleled quantity and quality of data (and more current, real-time data) than at any time in history. They have more technological options for building services on this data — and this is important because different use cases (using different types of data) make it possible to choose the right technology for what you need. For example, there are numerous open-source options, as well as proprietary machine learning platforms. Many of these make the 10-year-old Hadoop technology look dated.


In bigger crackdown on crypto abuses, SEC goes after unregistered coin offerings

The U.S. Securities and Exchange Commission in Washington, D.C.
The settlement comes a week after the agency notched another "first," settling charges that a crypto firm called EtherDelta was operating as an unregistered exchange. The cases underscore the SEC's insistence that the relatively new digital financial products must follow traditional securities rules. "We have made it clear that companies that issue securities through ICOs are required to comply with existing statutes and rules governing the registration of securities," Stephanie Avakian, the SEC's co-director of enforcement, said in a statement. "These cases tell those who are considering taking similar actions that we continue to be on the lookout for violations of the federal securities laws with respect to digital assets." On Thursday, federal prosecutors in New York announced a guilty plea by a man who defrauded investors with two cryptocurrencies he founded during the initial coin offering boom.


All Roads Lead to Liquidation: Crypto Companies Cash in Big

The rising trend of acquisition could be the result of simple, sudden opportunity. Of the Bitstamp acquisition, CEO Nejc Kodrič said that “the sale wasn’t planned. There was no active effort to go around and solicit buyers. The vibrant industry last year sparked potential interest from buyers to make a footprint in the industry. We started to get approached by buyers in the middle of last year.” Indeed, acquisition is a swift, simple way for a company’s owners to profit while maintaining some control over the company’s operations. Kodrič still holds a 10 percent stake in the company; Damian Merlak, his co-founder, sold all of his 30 percent stake. Generally speaking, “the benefits of [acquisition] include receiving valuable intellectual property and the talented employees of the acquired company – those are precious resources that can help companies grow quickly. Communities and a new user-base are also precious resources the acquirer gets after the deal,” explained Ruslan Gavrilyuk, co-founder and president of Kepler Finance.


Spark Application Performance Monitoring using Uber JVM Profiler, InfluxDB and Grafana

Apache Spark provides a web UI and a REST API for metrics. Spark also provides a variety of sinks, including console, JMX, servlet, and Graphite sinks. There are a few other open-source performance monitoring tools available, like Dr. Elephant, Sparklint, Prometheus, etc. Metrics provided by these tools are mostly server-level metrics, and a few of them also provide information about running applications. Uber JVM Profiler collects both server-level and application code metrics. This profiler can collect all metrics (CPU, memory, buffer pool, etc.) from the driver, executor or any JVM. It can instrument existing code without modifying it, so it can collect metrics about methods, arguments and execution time. For storing metrics for time-series analysis, we will use InfluxDB, which is a powerful time-series database. We will extend Uber JVM Profiler and add a new reporter for InfluxDB so metrics data can be stored using its HTTP API. For the dashboard of graphs and charts we will use Grafana, which will query InfluxDB for the metrics data.
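
The reporter itself would be written in Java against the profiler's reporter interface, but the core of the InfluxDB extension is just formatting each metric point as InfluxDB line protocol and POSTing it to the v1 HTTP /write endpoint. A minimal Python sketch of that idea follows; the measurement, tag, and field names are illustrative, not the profiler's actual metric names, and the sketch omits line-protocol escaping and the integer `i` suffix:

```python
import time
import urllib.request

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one metric point as InfluxDB line protocol:
    measurement,tag1=v1 field1=v1,field2=v2 <timestamp-ns>"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

def write_point(base_url, db, line):
    """POST one line to InfluxDB's v1 HTTP write endpoint."""
    req = urllib.request.Request(
        f"{base_url}/write?db={db}",
        data=line.encode("utf-8"),
        method="POST",
    )
    return urllib.request.urlopen(req)  # raises on HTTP errors

print(to_line_protocol(
    "jvm_metrics",
    tags={"app": "spark-pi", "role": "executor"},
    fields={"heap_used": 1234567, "cpu_load": 0.42},
    ts_ns=1542240000000000000,
))
```

A production reporter would batch many points per POST rather than writing them one at a time.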



Quote for the day:


"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford


Daily Tech Digest - November 17, 2018

The researchers from New York University detail in a new paper how they used a neural network to create 'DeepMasterPrints', or realistic synthetic fingerprints that have the same ridges visible when rolling an ink-covered fingertip on paper. The attack is designed to exploit systems that match only a portion of the fingerprint, like the readers used to control access to many smartphones. The aim is to generate fingerprint-like images that match multiple identities to spoof one identity in a single attempt. DeepMasterPrints are an improvement on the MasterPrints the researchers developed last year, which relied on modifying details from already captured fingerprint images used by a fingerprint scanner for matching purposes. The previous method was able to mimic the images stored in the file, but couldn't create a realistic fingerprint image from scratch. The researchers tested DeepMasterPrints against NIST's ink-captured fingerprint dataset and another dataset captured from sensors.


The strategy of treating containers as logically identical units that can be replaced, spun up, and moved around without much thought works really well for stateless services but is the opposite of how you want to manage distributed stateful services and databases. First, stateful instances are not trivially replaceable, since each one has its own state which needs to be taken into account. Second, deployment of stateful replicas often requires coordination among replicas—things like bootstrap dependency order, version upgrades, schema changes, and more. Third, replication takes time, and the machines which the replication is done from will be under a heavier load than usual, so if you spin up a new replica under load, you may actually bring down the entire database or service. One way around this problem—which has its own problems—is to delegate the state management to a cloud service or database outside of your Kubernetes cluster. That said, if we want to manage all of our infrastructure in a uniform fashion using Kubernetes, then what do we do?


A data lake is where vast amounts of raw data, or data in its native format, is stored, unlike a data warehouse, which stores data in files or folders (a hierarchical structure). Data lakes provide unlimited space to store data, unrestricted file size and a number of different ways to access data, as well as providing the tools necessary for analysing, querying and processing. In a data lake, each data item is assigned a unique identifier and a set of metadata tags. In this way the data lake can be queried for relevant data, and that smaller set of relevant data can then be analysed. Data can also be stored in data lakes before being curated and moved to a data warehouse. ... Azure Data Lake is an Apache Hadoop file system compatible with HDFS, and it enables Microsoft services such as Azure HDInsight and Revolution R Enterprise, as well as industry Hadoop distributions like Hortonworks and Cloudera, to connect to it. Azure Data Lake has all Azure Active Directory features, including Multi-Factor Authentication, conditional access, role-based access control, application usage monitoring, and security monitoring and alerting.
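
The identifier-and-tags scheme described above can be illustrated with a toy catalog (all names here are hypothetical, not any vendor's API):

```python
import uuid

class DataLake:
    """Toy catalog: each raw item gets a unique identifier plus
    metadata tags, and queries filter on tags so only a small
    relevant subset needs to be analysed."""

    def __init__(self):
        self._items = {}

    def put(self, blob, **tags):
        """Store a raw blob in its native format, tagged with metadata."""
        item_id = str(uuid.uuid4())
        self._items[item_id] = {"blob": blob, "tags": tags}
        return item_id

    def query(self, **tags):
        """Return ids of items whose metadata matches all given tags."""
        return [
            item_id
            for item_id, item in self._items.items()
            if all(item["tags"].get(k) == v for k, v in tags.items())
        ]

lake = DataLake()
lake.put(b"\x00\x01\x02", source="sensor-7", kind="telemetry")
lake.put(b"raw log line", source="web", kind="log")
print(len(lake.query(kind="telemetry")))  # -> 1
```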


Harvard researchers want to school Congress about AI

Funded by HKS’s Shorenstein Center on Media, Politics, and Public Policy, the initiative will focus on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for US Congress members to help them learn more about the technology. The hope is that with these combined efforts, Congress and other policymakers will be better equipped to effectively regulate and shepherd the growing impact of AI on society. Over the past year, a series of high-profile tech scandals have made increasingly clear the consequences of poorly implemented AI. This includes the use of machine learning to spread disinformation through social media and the automation of biased and discriminatory practices through facial recognition and other automated systems. In October, at the annual AI Now Symposium, technologists, human rights activists, and legal experts repeatedly emphasized the need for systems to hold AI accountable.  “The government has the long view,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund.


Role of digitisation and technologies like AI & ML in digital transformation of SMEs?


More specifically, AI-based solutions like automation can greatly benefit SMEs by streamlining processes such as sales planning, finance and supply chain management, marketing, etc. These processes, which most SMEs still conduct through offline methods, considerably reduce the efficiency of the enterprise, since the managers’ focus is largely on operations rather than on serving customers and retaining them. Simultaneously, digitised business management and enterprise mobility solutions can enable SMEs to expand their business to any region within the country or outside, without having to worry about the associated infrastructural and monetary challenges.

Customised, enterprise-centric solutions with AI and Machine Learning

Every organisation faces a different set of issues and challenges. The solutions to effectively tackle these challenges should therefore also be specific to the business segment, as well as the industry, in which the enterprise is involved.


What Edge Computing Means for Infrastructure and Operations Leaders

Edge computing solutions can take many forms. They can be mobile in a vehicle or smartphone, for example. Alternatively, they can be static — such as when part of a building management solution, manufacturing plant or offshore oil rig. Or they can be a mixture of the two, such as in hospitals or other medical settings. The capabilities of edge computing solutions range from basic event filtering to complex-event processing or batch processing. “A wearable health monitor is an example of a basic edge solution. It can locally analyze data like heart rate or sleep patterns and provide recommendations without a frequent need to connect to the cloud,” says Rao. More complex edge computing solutions can act as gateways. In a vehicle, for example, an edge solution may aggregate local data from traffic signals, GPS devices, other vehicles, proximity sensors and so on, and process this information locally to improve safety or navigation. More complex still are edge servers, such as those found in next-generation (5G) mobile communication networks.
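
The basic event filtering Rao describes amounts to a few lines of logic running on the device: analyze readings locally and forward only the interesting ones upstream. A minimal sketch, with purely illustrative heart-rate thresholds:

```python
def filter_events(readings, low=40, high=180):
    """Basic edge-side event filtering: keep only heart-rate readings
    outside a normal range, so only anomalies need a cloud connection."""
    return [r for r in readings if r < low or r > high]

# Readings sampled locally on the wearable; only two warrant an upload.
readings = [72, 75, 190, 71, 38, 80]
print(filter_events(readings))  # -> [190, 38]
```

Complex-event processing, by contrast, would correlate multiple streams (as in the vehicle gateway example) rather than thresholding a single one.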


The rare form of machine learning that can spot hackers who have already broken in


In cybersecurity, supervised learning works pretty well. You train a machine on the different kinds of threats your system has faced before, and it chases after them relentlessly. But there are two main problems. For one, it only works with known threats; unknown threats still sneak in under the radar. For another, supervised-learning algorithms work best with balanced data sets—in other words, ones that have an equal number of examples of what to look for and what to ignore. Cybersecurity data is highly unbalanced: there are very few examples of threatening behavior buried in an overwhelming amount of normal behavior. Fortunately, where supervised learning falters, unsupervised learning excels. The latter can look at massive amounts of unlabeled data and find the pieces that don’t follow the typical pattern. As a result, it can surface threats that a system has never seen before, and it needs only a few anomalous data points to do so.
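
The unsupervised idea can be reduced to a minimal sketch: learn a profile of "normal" from unlabeled data, then flag the few points that fall far outside it. Real systems use far richer models than this z-score toy, and the traffic numbers are invented:

```python
from statistics import mean, stdev

def fit_profile(samples):
    """Learn a crude profile of 'normal' from unlabeled data
    (assumed to be overwhelmingly normal behavior)."""
    return mean(samples), stdev(samples)

def anomalies(samples, mu, sigma, z=3.0):
    """Flag points more than z standard deviations from the mean."""
    return [x for x in samples if abs(x - mu) > z * sigma]

# Logins per hour observed during a quiet week (no labels needed).
baseline = [98, 102, 101, 99, 100, 97, 103, 101, 99, 100]
mu, sigma = fit_profile(baseline)

# New traffic: the spike is surfaced without a single labeled example.
print(anomalies([101, 97, 500, 102], mu, sigma))  # -> [500]
```

This is the key contrast with the supervised case: nothing here was ever told what an attack looks like.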


Building a Web App With Yeoman

Released in 2012, Yeoman is an efficient open-source software system for scaffolding web applications, used to streamline the development process. It is known primarily for its focus on scaffolding: coordinating many different tools and interfaces for optimized project generation. Yeoman is hosted on GitHub. The Yeoman experience is three-tiered; though they work together seamlessly, each part of Yeoman was developed separately and works individually. First, Yeoman includes "yo," the command-line utility used with Yeoman. This is the baseline of the Yeoman software platform. Next, Yeoman has "Grunt" and "Gulp," application builders that help automate your application development. Finally, the Yeoman software features "npm," which is a package manager. Package managers manage code packages, and their dependencies, for back-end and front-end development while you build your application. Yeoman provides developers with many options to combine in their development process.


Enterprise architecture still matters


Rather than checking in on how each team is operating, EAs should generally focus on the outcomes these teams have. Following the rule of team autonomy (described elsewhere in this booklet), EAs should regularly check on each team’s outcomes to determine any modifications needed to the team structures. If things are going well, whatever’s going on inside that black box must be working. Otherwise, the team might need help, or you might need to create new teams to keep the focus small enough to be effective. Most cloud native architectures use microservices, ideally to safely remove dependencies that can deadlock each team’s progress as it waits for a service to update. At scale, it’s worth defining how microservices work as well, for example: are they event based, how is data passed between different services, how should service failure be handled, and how are services versioned? Again, a senate of product teams can work at a small scale, but not on the galactic scale.


Put Your BLL Monster in Chains

A very popular architecture for enterprise applications is the triplet Application, Business Logic Layer (BLL), Data Access Layer (DAL). For some reason, as time goes by, the Business Layer starts getting fatter and fatter, losing its health in the process. Perhaps I was doing it wrong, but somehow very well designed code gets old and turns into a headless monster. I ran into a couple of these monsters and have been able to tame them using FubuMVC's behaviour chains, a pattern designed for web applications that I have found useful for breaking down complex BLL objects into nice maintainable pink ponies. ... High code quality is very important if you want a maintainable application with a long lifespan. By choosing the right design patterns and applying some techniques and best practices, any tool will work for us and produce really elegant solutions to our problems. If, on the other hand, you learn only how to use the tools, you are going to end up programming for the tools and not for the ones who sign your pay-checks.
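
FubuMVC's behaviour chains are a .NET construct, but the shape of the pattern translates to any language: replace one fat BLL method with a chain of small, single-purpose behaviours, each of which can short-circuit the rest. A hypothetical sketch (class and rule names are invented for illustration):

```python
class Behavior:
    """One link in a behaviour chain: do one job, then invoke the next."""
    def __init__(self, inner=None):
        self.inner = inner

    def invoke(self, ctx):
        if self.inner:
            self.inner.invoke(ctx)

class Validate(Behavior):
    def invoke(self, ctx):
        if ctx.get("amount", 0) <= 0:
            ctx["errors"] = ["amount must be positive"]
            return  # short-circuit: the rest of the chain never runs
        super().invoke(ctx)

class Audit(Behavior):
    def invoke(self, ctx):
        ctx.setdefault("log", []).append(f"processing {ctx['amount']}")
        super().invoke(ctx)

class Process(Behavior):
    def invoke(self, ctx):
        ctx["result"] = ctx["amount"] * 2  # stand-in for the business rule

def build_chain():
    # Assembled outermost-first: Validate -> Audit -> Process
    return Validate(Audit(Process()))

ctx = {"amount": 21}
build_chain().invoke(ctx)
print(ctx["result"])  # -> 42
```

Each behaviour stays small enough to test in isolation, which is exactly the taming effect the article describes.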



Quote for the day:


"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright


Daily Tech Digest - November 16, 2018

Microsoft now offers blockchain development kit

Microsoft has released its serverless Azure Blockchain Development Kit, which promises to extend the capabilities of earlier blockchain-based development templates. “Apps have been built for everything from democratizing supply chain financing in Nigeria to securing the food supply in the UK, but as patterns emerged across use cases, our teams identified new ways for Microsoft to help developers go farther, faster,” Marc Mercuri, Microsoft’s blockchain engineering principal program manager, wrote in a blog post. “The Azure Blockchain Development Kit is the next step in our journey to make developing end-to-end blockchain applications accessible, fast, and affordable to anyone with an idea,” he said. A serverless approach, according to Mercuri, would “reduce costs and management overhead.” Without a virtual machine (VM) server to deal with, the kit is made affordable and “within reach of every developer—from enthusiasts to ISVs [independent software vendors] to enterprises.”


8 features a cybersecurity technology platform must have
Any security researcher will tell you that at least 90% of cyber attacks emanate from phishing emails, malicious attachments, or weaponized URLs. A cybersecurity platform must apply filters and monitoring to these common threat vectors for blocking malware and providing visibility into anomalous, suspicious, and malicious behaviors. ... Cybersecurity technology platform management provides an aggregated alternative to the current situation where organizations operate endpoint security management, network security management, malware sandboxing management, etc. ... CISOs want their security technologies to block the majority of attacks with detection efficacy in excess of 95%. When attacks circumvent security controls, they want their cybersecurity technology platforms to track anomalous behaviors across the kill chain (or the MITRE ATT&CK framework), provide aggregated alerts that string together all the suspicious breadcrumbs, and provide functions to terminate processes, quarantine systems, or rollback configurations to a known trusted state.



Vaporworms: New breed of self-propagating fileless malware to emerge in 2019

Fileless malware strains will exhibit wormlike properties in 2019, allowing them to self-propagate by exploiting software vulnerabilities. Fileless malware is more difficult for traditional endpoint detection to identify and block because it runs entirely in memory, without ever dropping a file onto the infected system. Combine that trend with the number of systems running unpatched software vulnerable to certain exploits, and 2019 will be the year of the vaporworm.

Attackers hold the Internet hostage

A hacktivist collective or nation-state will launch a coordinated attack against the infrastructure of the internet in 2019. The protocol that controls routing on the internet (BGP) operates largely on the honour system, and a 2016 DDoS attack against hosting provider Dyn showed that a single attack against a hosting provider or registrar could take down major websites. The bottom line is that the internet itself is ripe for the taking by someone with the resources to DDoS multiple critical points underpinning the internet or abuse the underlying protocols themselves.


Making sense of Microsoft's approach to AI

As Guggenheimer explains, Microsoft's idea is to let customers jump in where they are. Those on the lower end of the AI experience chain might want to begin dabbling with AI with business intelligence and apps. Microsoft's announcement this week about its plan to add AI capabilities to Power BI (as explained here by my ZDNet colleague Andrew Brust) is the cornerstone of this part of Microsoft's strategy. For customers with a little more AI experience and who are willing to do a bit more customization, Microsoft's Dynamics 365 software-as-a-service apps -- especially those which recently got their own AI boost -- provides another place for customers to get their AI feet wet, Guggenheimer suggests. The next two pieces of Microsoft's AI strategy are where there's been a lot of announcements, as of late. Microsoft is working on a number of AI "Accelerators," solution templates and analytics templates to give users a way to build on top of some repeatable patterns and practices around AI.


Why women leave tech

“Lack of career growth or trajectory is a major factor driving women to leave their jobs — this was the most common response (28 percent) when we asked why they left their last job,” writes Kim Williams, senior director of design at Indeed, in a summary of Indeed's research. “The second most-common reason for leaving was poor management, with a quarter of respondents choosing this reason. Slow salary growth came in as the third most-common reason (24 percent) respondents left their last job. By contrast, issues related to lifestyle, such as work-life balance (14 percent), culture fit (12 percent) and inadequate parental leave policies (2 percent) were less common reasons for leaving a job,” Williams says. ... As Williams writes, “Meanwhile, many women in tech believe that men have more career growth opportunities — only half (53 percent) think they have the same opportunities to enter senior leadership roles as their male counterparts. And among women who have children or other family responsibilities, almost a third (28 percent) believe they’ve been passed up for a promotion because they are a parent or have another family responsibility.”


What is the MEAN stack? JavaScript web applications

In short, the MEAN stack is JavaScript from top to bottom, or back to front. A big part of MEAN’s appeal is this consistency. Life is simpler for developers because every component of the application—from the objects in the database to the client-side code—is written in the same language.  This consistency stands in contrast to the hodgepodge of LAMP, the longtime staple of web application developers. Like MEAN, LAMP is an acronym for the components used in the stack—Linux, the Apache HTTP server, MySQL, and either PHP, Perl, or Python. Each piece of the stack has little in common with any other piece.  This isn’t to say the LAMP stack is inferior. It’s still widely used, and each element in the stack still benefits from an active development community. But the conceptual consistency that MEAN provides is a boon. If you use the same language, and many of the same language concepts, at all levels of the stack, it becomes easier for a developer to master the whole stack at once.


Shift to outcomes-based security by focusing on business needs

As well as an emphasis on education, it is essential that organisations foster a culture that supports “doing the right thing”. This requires mechanisms and processes that enable concerns to be raised easily and without fear of retribution. This does not happen overnight, however, and enterprises need to allow time for it to embed fully. It is important that people throughout the organisation feel supported and confident in speaking up about any activities that may adversely affect the security design or increase the threats. This may sound obvious, but business projects have defined plans and milestone dates, and standing in the way of these to raise concerns from a secure architecture point of view is a daunting prospect. However, a supportive culture and an outcomes-focused security strategy will champion legitimate challenges, hearing and considering the claim regardless of the seniority of the individual making it. Similarly, there need to be appropriate channels for individuals to flag poor practice, without having to challenge the perpetrator directly.


Google Cloud Scheduler brings job automation to GCP

While Google encourages customers to use Cloud Scheduler for App Engine workloads on GCP, the service also works with any HTTP/S endpoint or Publish/Subscribe messaging topic. One example of the former is an on-premises enterprise application that exposes back-end data to a cloud service via HTTP/S. Publishers take many forms, such as a sensor installed at a remote oil rig. As the sensor generates various types of messages, the publish/subscribe approach sends them to a broker system, which then forwards them on to subscribers in real time. This approach can save time and effort by eliminating the maintenance of a slew of point-to-point integrations, and it makes sense for use cases such as IoT. Google offers a publish/subscribe service for GCP. Google Cloud Scheduler uses a serverless architecture, so customers only pay for job invocations as needed; pricing starts at $0.10 per job, per month, with three free jobs per month. It's difficult to compare Cloud Scheduler's cost to, for example, Azure Scheduler, which has a much more granular pricing model.
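
At the quoted rates the arithmetic is simple; a sketch (actual billing granularity may differ from this whole-job model):

```python
def monthly_cost(jobs, cents_per_job=10, free_jobs=3):
    """Per the quoted pricing: $0.10 per job per month, with the
    first three jobs free. Computed in cents to avoid float drift."""
    billable = max(0, jobs - free_jobs)
    return billable * cents_per_job / 100

print(monthly_cost(10))  # -> 0.7  (seven billable jobs at $0.10)
print(monthly_cost(2))   # -> 0.0  (entirely within the free tier)
```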


Securing the IoT has become business-critical

The near ubiquity of IoT does raise the security flag, as it presents a significant threat vector for hackers to breach companies. DigiCert’s goal in running the survey was to understand the state of IoT adoption, understand security implications, and quantify the benefits of having made the investments in IoT security. The survey focused on the four industry verticals where IoT was most mature — industrial, consumer products, healthcare, and transportation — and sampled companies of all sizes, with the median size being 3,000 employees. The survey asked what objective companies were trying to achieve with IoT. The top responses were operational efficiency, customer experience, increased revenue, and business agility. It’s been my experience that businesses that are early in the adoption cycle of IoT are looking to cut costs through automation, which leads to better efficiency, but they quickly pivot to customer experience as a way of creating new revenue streams.


Ahead of Black Friday, Rash of Malware Families Takes Aim at Holiday Shoppers

“The malware can intercept input data on target sites, modify online page content, and/or redirect visitors to phishing pages,” Kaspersky Lab researchers noted in a posting on Thursday, one week ahead of Thanksgiving. They added that the malicious code, once installed, often lies in wait for the consumer to visit an e-commerce page, and then simply grabs the payment form wholesale. “Form-grabbing is a technique used by criminals to save all the information that a user enters into forms on a website,” the team noted. “And on an e-commerce website, such forms are almost certain to contain: login and password combination as well as payment data such as credit card number, expiration date and CVV. If there is no two-factor transaction confirmation in place, then the criminals who obtained this data can use it to steal money.” Armed with the stolen credentials, cybercriminals could hawk them on the Dark Web, or simply use the stolen accounts themselves: they can buy goods from a website using victims’ credentials, then resell the ill-gotten merchandise at a profit, a process that comes with built-in money laundering.



Quote for the day:


"The ultimate measure of a man is not where he stands in moments of comfort, but where he stands at times of challenge and controversy." -- Martin Luther King Jr.


Daily Tech Digest - November 15, 2018

Every technological advance can and will be exploited at some point, but if we think before we quickly push devices into consumers’ and corporations’ hands – if we build security and privacy in from the start – we’ll have a better handle on what can go wrong. Take medical devices, for instance. Per a recent study by Trend Micro, more than 100,000 medical devices were discovered to be insecure. Think of an infusion pump precisely monitoring the flow of a lifesaving fluid into your loved one. Don’t think it can be hacked and the dosage changed? Think it doesn’t happen? The HIPAA Journal recently featured a study by Vanderbilt University suggesting that healthcare data breaches cause 2,100 deaths a year. Was this IoT related? I don’t know, but the evidence of what can happen with unmanaged, insecure IoT is powerful and must be addressed. So, where to now? Want to learn more about IoT? It really applies to everything: medicine, health, transportation, smart cities and smart homes.


How to add IoT functions to legacy equipment

The hardest part of bringing the IoT to older systems seems to be dealing with the unique, one-off characteristics of each legacy situation — often without accurate documentation. “Older equipment sometimes requires a necessary, unique design step in each individual case,” Flynn says. The key, he adds, is to avoid disrupting the existing control scheme and operations of the legacy system. “We have to be careful not to create new issues. If the legacy system uses an older communication protocol, then we have to ensure not to overload any bandwidth or processor,” he says. If that’s not possible, the alternative is to select the right new IoT sensors and instrumentation to solve a particular problem. That, in turn, requires a higher level of operational technology expertise. But that’s only part one, Flynn says. You still have to network into an existing IT infrastructure, often using a combination of edge devices and sensors. New Wi-Fi connections may be needed.


Elastic tackles containers and APM in the new 6.5 release

As Elastic adds capabilities for supporting the new forms of deployments, largely cloud-native, involving containers and serverless infrastructure, another theme of the new release is going higher up the stack and ramping up competition with, as opposed to complementing, APM vendors. The new release of Elastic APM allows users to correlate data on application performance with infrastructure logs, server metrics, and security events to identify bottlenecks. In itself, this capability overlaps those of APM vendors, who have built their IP over the years by understanding how to abstract low-level log readings from the standpoint of application processes making their way through IT infrastructure. A major difference from Elastic is that the APM crowd built their expertise in the walled gardens of data center deployments. By contrast, Elastic was not necessarily engineered for the cloud, but its scale-out, big data architecture made it a natural for the cloud.


Terraform orchestration matures as multi-cloud lingua franca

Terraform 0.12 makes remote state storage available free to users of the open source edition as well. Without this feature, multiple IT administrators might overwrite one another's infrastructure code or lack a single "source of truth" for infrastructure configurations. With 0.12, HashiCorp established a SaaS remote state management product for open source users that can indefinitely store an unlimited amount of state information. Terraform 0.12 also revamps the HashiCorp Configuration Language (HCL), its domain-specific language for infrastructure code, to make it more consistent and easy to use. Enterprise IT shops already favor Terraform orchestration for multi-cloud microservices management but said there was a time when ease of use was an issue. "Terraform has been instrumental for us to tame the chaos of multiple clouds and data centers," said Zack Angelo, director of platform engineering at BigCommerce, an e-commerce company based in Austin, Texas. "But in the past, if you weren't on Terraform Enterprise, migrating a state file was a pain point ..."


Global Family Business Survey 2018


The release of our ninth PwC Global Family Business Survey comes at a time of extraordinary transformation. Digital technology is disrupting whole industries; sustainability is becoming central to the conduct of business; in the corporate and financial worlds, winning trust is more important than it’s ever been; and millennials represent an enduring demographic change. After surveying nearly 3,000 family businesses across 53 territories, we were able to show that family businesses - built around strong values and with an aspirational purpose - have a competitive advantage in disruptive times, one that pays off in real terms. We therefore believe there is an enormous opportunity for family businesses to start generating real gains from their values and purpose by adopting an active approach that turns these into their most valuable asset.


How Kubernetes is becoming a platform for AI

Xinglang Wang, a principal engineer at eBay, said AI had a high barrier to entry, but packaging tools in a Kubernetes cluster made it easier for businesses to get started on an AI project. At eBay, he said Kubernetes was used to create a unified AI platform, which enables data sharing and sharing of AI models. The AI platform also provides automation to enable eBay to train and deploy AI models. One of the big users at the KubeCon Shanghai event was Chinese e-commerce retailer JD.com. Explaining the use of AI at JD.com, principal architect Yuan Chen described how the company was running one of the largest Kubernetes clusters in the world. While it was traditionally used to support a microservices architecture, he said: “Everything is now driven by AI, so we have to use Kubernetes for AI. It is the right infrastructure for deep learning to train the AI models. AI scientists are expensive, so they should focus on their algorithms and not have to worry about deploying containers.”


The Linux desktop: With great success comes great failure

First, while the major Linux companies — Canonical, Red Hat and SUSE — all support Linux desktops, they all decided early on that the big money was to be made with servers (and nowadays with containers and the cloud). The biggest Linux players determined that the Linux desktop was a small market — and then they did very little to change that. But there’s more to it than that. The Linux desktop has also been plagued by fragmentation. There is no one Linux desktop; there are dozens, and they are not at all alike. There’s the Debian Linux family, which includes Ubuntu and Mint; the Red Hat team, with Fedora and CentOS; Arch Linux; Manjaro Linux; and numerous others. And then there are the desktop interfaces. Personally, as a dedicated Linux desktop user for decades, I love that I have a choice between GNOME, KDE Plasma, Cinnamon, Xfce, MATE, etc. for my desktop interface. But most people just find it confusing. All of that just scratches the surface.


GPS killer? Quantum 'compass' promises satellite-free navigation

The transportable quantum accelerator could address GPS's dependence on satellite signals, which can be jammed or spoofed by an attacker, rendering the system useless for navigational information. Instead of using GPS, scientists from Imperial College London and UK laser instrument maker M Squared have demonstrated a way to measure how super-cooled atoms respond when inside an accelerating vehicle. Accelerometers are used for navigation, but as the researchers explain, they quickly lose accuracy over time unless aided by satellite signals. The satellite-free navigational device they created relies on M Squared's laser, which cools atoms in a chamber to the point where they behave in a quantum way, as both matter and waves. When a vehicle carrying the device moves, the wave properties of the cooled atoms are affected by its acceleration. A laser beam that acts as an 'optical ruler' measures how atoms move over time.
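The drift problem the researchers describe follows from dead reckoning: position is recovered by integrating acceleration twice, so even a small constant bias in the accelerometer grows quadratically as a position error. As a back-of-the-envelope illustration (the bias value is hypothetical, chosen only to show the scale of the effect):

```latex
\delta x(t) = \int_0^t \!\! \int_0^{t'} \delta a \; dt'' \, dt' = \tfrac{1}{2}\, \delta a \, t^2
```

A bias of only $\delta a = 10^{-3}\,\mathrm{m/s^2}$ therefore accumulates to roughly $\tfrac{1}{2}(10^{-3})(3600)^2 \approx 6.5$ km of position error after one hour, which is why conventional accelerometers must be periodically corrected by satellite signals, and why a far more stable quantum accelerometer could remove that dependence.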


Zero-trust security not an off-the-shelf product


Zero trust is a “business enabler” because, done correctly, it enables businesses to move both faster and more securely, since it is a combination of processes and technologies, he said. “Security is improved because it effectively blocks lateral movement within organisations.” It is widely recognised that complexity is the enemy of security because it encourages end-users and business leaders to bypass security, said Simmonds. “The zero-trust model once again improves security by reducing complexity, and if you get it right, it works for everyone, including business partners, by providing a unified experience with greater flexibility and productivity,” he said. On the other hand, zero trust is not about trusting no one, said Simmonds, it is not a “next-generation perimeter” and it is not “VPN modernisation”. “It is not an off-the-shelf product,” he said.


Understanding the CEO’s role early in digital transformation programs

First, the CEO should be marketing the mission. It must be repeated to leaders and employees several times, and the CEO should help answer several key questions. Why must the organization pursue the defined digital business strategy? What are the issues with the existing business model? Who are the new competitors disrupting existing businesses, products, and services? What markets is the organization targeting? What are the new and emerging customer needs and expectations? Why is technology critical for future success? These communications should always end with some of the short-term goals of the program and how people can participate. The CIO and others on the leadership team should also be communicating and answering these questions, but the staff wants to know and see that the CEO is truly behind the program and driving it. With a strategy and mission defined, there needs to be clarity on how the program is being led and how responsibilities are aligned.



Quote for the day:


"A leader must have the courage to act against an expert's advice." -- James Callaghan


Daily Tech Digest - November 14, 2018

Despite rise in security awareness, employees’ poor security habits are getting worse

Efforts to get around IT may not necessarily be done with malicious intent, but the reality is they directly increase IT risk for the organization. For example, 13% of employees admitted they would not immediately notify their IT department if they thought they had been hacked. Further compounding this issue is a workforce that tends not to understand the role of all employees in keeping an organization secure, as 49% of respondents would actually blame the IT department for a cyberattack if one occurred as a result of an employee being hacked. However, it’s not just today’s employees exposing organizations to risk. As the digital transformation blurs the traditional security perimeter with cloud apps, it is also redefining the definition of a “user.” Enterprises are increasingly adopting software bots powered by robotic process automation (RPA), and granting them access to mission-critical applications and data, like their human counterparts.


GPUs are vulnerable to exploitation
A side-channel attack is one where the attacker uses how a technology operates, in this case a GPU, rather than a bug or flaw in the code. It takes advantage of how the processor is designed and exploits it in ways the designers hadn’t thought of. In this case, it exploits the performance counters in the GPU, which are used for performance tracking and are available in user mode, so anyone has access to them. The researchers demonstrated three types of GPU attacks, all of which require the victim to download a malicious program that spies on the victim’s computer. The first attack tracks user activity on the web, since GPUs are used to render graphics in browsers. A malicious app uses OpenGL to create a spy program that infers the behavior of the browser as it uses the GPU. The spy program can reliably obtain all allocation events of each website visited to see what the user has been doing on the web and possibly extract login credentials. In the second attack, the authors extracted user passwords, because the GPU is used to render the login/password box: monitoring leaked memory allocation events allowed for keystroke logging.


Defences based on Lockheed Martin’s cyber kill chain were mainly aimed at preventing reconnaissance, weaponising, delivery and exploitation, said Tolbert, with detection and response only required at the malware installation, callback and execution phases of the kill chain. While this is still a valid approach, he said the Mitre framework was more up to date and more realistic, with prevention mentioned only in connection with the initial access and execution phases, while detection and response is specified with regard to the eight other phases, including privilege escalation, credential theft, lateral movement and exfiltration. “These frameworks are useful in helping organisations to plan where they need to do work, and while prevention always will be important, there has been a shift in emphasis to detection and response. We believe artificial intelligence and machine learning [ML] can help in making this shift,” said Tolbert. 


Managing Change in the Face of Skepticism

To be sure, skepticism can play a positive role in organizations. It helps companies make better decisions. It can build tight trading algorithms and sturdy satellites. Doubting hockey-stick growth and suspicious data helps companies make better capital allocation choices. When skepticism is constructive, it can be leveraged to evaluate a change effort’s benefits, and build employee enthusiasm for it. Cynicism is a different matter. It often stems from a history of failed programs or lack of management credibility. Cynicism breeds distrust and pessimism, so if it is present, transformation efforts must restore credibility before moving to the next step of the plan. Regardless of an organization’s culture, business leaders should avoid strong-arming change — recent failed transformation efforts have shown the pitfalls of that approach. When leading a transformation in a skeptical culture, look within and leverage the skepticism to move forward.


Why the Artificial Intelligence Era Requires New Approaches to Create AI Talent


With the horizons of artificial intelligence expected to broaden in the future, one can tell that AI talent will have an imperative role to play in company performance. The talent that a company possesses in the field of AI dictates how well it can manage the analytics of the future. The best AI talent in the market will understand the performance of different models and harness their potential to the fullest. This knowledge is what AI companies will crave in the future. As the age of AI kicks in, the management philosophy will also change. While previously management was involved in routine decision making and innovation, the AI age will see organizations rely more on their top talent to define and lead innovation. The innovation that the workforce inside an organization brings will be the differentiating factor for all forms of AI companies. Their workers will help propel them forward and foster innovation for them.


Mastering data governance initiatives in the age of IIoT

Most IIoT gadgets both send and receive information about processes that occur within the scope of the businesses that use them. However, concerning distributed data, companies must ensure it doesn't reveal information to recipients that could highlight trade secrets. For example, many IIoT sensors track various actions that happen in assembly lines. If recipients can extract details from information that tells them how companies go about making their products and what helps them stand out, businesses will discover their operations are not sufficiently locked down from outside parties. While some of those entities might not seek gain from the information, others may try to mimic certain practices. When that happens, the increased competitiveness mentioned above becomes less prominent and may no longer be relevant at all. However, keeping sensitive information secret is not straightforward. That's because it takes substantial forethought to figure out how to spend money on IIoT equipment that works seamlessly together.


How to Prevent Data Leaks in a Collaborative World 


MyWorkDrive is a secure data access and collaboration solution. It does not require the organization to copy all its data to a cloud provider, and it does not require users to access data via VPN. Data is shared and collaborated on in place. The customer installs the MyWorkDrive server software on a server within their environment. IT then points the MyWorkDrive server to the existing fileserver mount points to which it wants to allow access. The solution moves no data in the process. Users can directly access data through the MyWorkDrive WebClient, native desktop client, iOS or Android app. The software initially presents shared files in a browser window. MyWorkDrive administrators designate the actions users can take on those shared files. The administrator can remove the ability to download the file, to copy data to the clipboard and to take screenshots. The administrator can also watermark the files, which should discourage a user from taking a picture of the screen with a smartphone.


Why cryptojacking malware is a bigger threat to your PC than you realise

That's because cryptocurrency miners give attackers a foothold into PCs which can be exploited to deliver more damaging malware in future, security firm Fortinet has warned in its latest threat landscape report - noting that underestimating cryptojacking places organisations under heightened risk. "What we're finding out is that this particular malware also has other nefarious activities that it does while it's mining for cryptocurrency," Anthony Giandomenico, senior security researcher at Fortinet's FortiGuard Labs told ZDNet. "It will disable your antivirus, open up different ports to reach out to command and control infrastructure, it can download other malware. Basically, it's reducing or limiting your security shields, opening you up to lots more different types of attacks". A number of examples of cryptocurrency miners packing an additional punch have already been spotted in the wild: PowerGhost alters how systems perform scans and updates, while also disabling Windows Defender.


Did IBM overhype Watson Health's AI promise?

While IBM faces declining revenue overall, and its recently released third-quarter earnings showed revenue from cognitive offerings was down 6% from last year, Watson Health saw growth, according to Barbini. He noted that IBM does not release numbers specific to Watson Health for "competitive reasons." Barbini admitted that developing Watson Health and, specifically, Watson for Oncology is not an easy task, but it remains an important one. "That's why IBM dove into it three years ago. Did you really think oncology would be mastered in three years?" Barbini said. "However, let's look at the facts. More than 230 hospitals are using one of our oncology tools. We've had 11 [software] updates over the last year and a half, and we've doubled the number of patients we've reached to over 100,000 as of the end of the third quarter of this year." Earlier this month, the head of Watson Health for the past three years, Deborah DiSanzo, stepped down and Kelly took over. DiSanzo is continuing to work with IBM Cognitive Solutions' strategy team, according to a company spokesperson.


Cisco fuses SD-WAN, security and cloud services

What Cisco is doing is adding support for its Umbrella security system to its SD-WAN software, which runs on top of the IOS XE operating system that powers its core branch, campus and enterprise routers and switches. Cisco describes Umbrella as a cloud-delivered secure internet gateway that stops current and emergent threats over all ports and protocols. It blocks access to malicious domains, URLs, IPs, and files before a connection is ever established or a file downloaded. It basically protects customers and communications at the Domain Name Server (DNS) layer. Umbrella’s key features come from OpenDNS, which Cisco bought for $635 million in 2015. OpenDNS offers a cloud service that prevents customers from connecting to dangerous internet IP addresses such as those known to be associated with criminal activity, botnets, and malicious downloads. “Umbrella blocks access to malicious destinations before a connection is ever established, and it is backed by the threat intelligence of Cisco Talos,” Prabagaran said.
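Blocking at the DNS layer means the resolver simply refuses to answer queries for known-bad domains, so no TCP connection to the malicious host is ever attempted. The toy resolver below illustrates that idea only; the blocklist entries, function names, and stand-in upstream resolver are invented for the sketch and bear no relation to how Umbrella actually works:

```python
# Hypothetical blocklist entries for illustration only.
BLOCKLIST = {"malicious.example", "botnet-c2.example"}

def resolve(domain, upstream):
    """Return an IP for `domain`, or None if blocked at the DNS layer."""
    # Check the domain and every parent domain against the blocklist,
    # so subdomains of a blocked zone are refused too.
    labels = domain.split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # the query never reaches the upstream resolver
    return upstream(domain)

# A stand-in upstream resolver for demonstration purposes.
ip = resolve("shop.example.com", lambda d: "93.184.216.34")
blocked = resolve("cdn.botnet-c2.example", lambda d: "93.184.216.34")
```

Because the check happens before any address is returned, a blocked client never learns where the malicious host lives — which is why DNS-layer filtering can stop threats "before a connection is ever established," as the article puts it.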



Quote for the day:


"If you find a path with no obstacles, it probably doesn't lead anywhere." -- Frank A Clark