Daily Tech Digest - November 30, 2018

Man-in-the-middle attacks: A cheat sheet

The concept behind a man-in-the-middle attack is simple: intercept traffic coming from one computer and pass it on to the original recipient without either party knowing that someone has read, and potentially altered, it. MITM attacks give their perpetrator the ability to do things like insert their own cryptocurrency wallet address to steal funds, redirect a browser to a malicious website, or passively steal information to be used in later cybercrimes. Any time a third party intercepts internet traffic, it can be called a MITM attack, and without proper authentication it's incredibly easy for an attacker to do. Public Wi-Fi networks, for example, are a common source of MITM attacks because neither the router nor a connected computer verifies the other's identity. In the case of a public Wi-Fi attack, the attacker would need to be nearby and on the same network, or alternatively to have placed a computer on the network capable of sniffing out traffic.
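
The defence implied above is authenticating the endpoint you are talking to. As a minimal illustration (not from the article), the sketch below opens a TLS connection with Python's standard library and leaves certificate and hostname verification switched on, so an interceptor presenting a forged certificate makes the handshake fail rather than silently succeed.

```python
import socket
import ssl

def fetch_https_head(host: str, port: int = 443) -> bytes:
    """Open a TLS connection that actually verifies the server's identity."""
    # create_default_context() turns on certificate validation and hostname
    # checking, so a man in the middle with a forged certificate causes the
    # handshake to fail instead of silently succeeding.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            request = b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
            tls.sendall(request)
            return tls.recv(4096)

# Raises ssl.SSLCertVerificationError if the certificate cannot be verified:
# print(fetch_https_head("example.com"))
```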


Technical Debt Will Kill Your Agile Dreams

Bad engineering decisions are in a different category from ones that were made tactically, with full knowledge that the short-term priority was worth it. When it's clear that such a decision was, in fact, a tactical decision, it is much easier to convince people that refactoring needs to happen and the debt has to be paid off. Unfortunately, when the term is used as a polite way of saying bad engineering, it's unlikely there is any repayment strategy in place, and it is even harder to create one: first you need to convince people there is some bad engineering, then you need to convince them it is causing problems, then you have to work out a better approach and convince various stakeholders of that too. Finally, you need to convince them that the investment needed to refactor is worthwhile. It is like trying to win 5 matches in a row away from home when you don't even have your best players.


3 Keys to a Successful “Pre-Mortem”


The concept of a pre-mortem has been around for years, but only recently have we seen it pick up speed in the engineering community. It is an activity run before starting a big stage of a project, but after doing a product mapping and prioritization activity. Rather than exploring what went wrong after the fact and what to do differently in the future, the goal of a pre-mortem is to identify potential pitfalls and then apply preventative measures. It’s a great idea, but for those new to the concept, it’s easy to overlook some important aspects of the process. Talking about what might go wrong is scary. It acknowledges that many things are out of our control, and that we might mess up the things which are within our control. Talking about what might go wrong, and how to adapt to it, acknowledges the possibility of failure. As this is a rare thing in industry, doing it outside of a structured activity, at least initially, can seem like trying to weasel your way out of work.



12 top web application firewalls compared

AWS WAF by itself does not offer the same sort of features you could expect from other solutions on this list, but coupled with other AWS solutions, AWS WAF becomes as flexible as any competing solution. Existing AWS customers will see the most value in selecting AWS WAF due to the architectural benefits of staying with a single vendor. ... Each architecture comes with its own set of pros and cons, varying from the simplicity of the SaaS option to the fine-grained control over configuration and deployment with the appliance-based offerings. Barracuda’s various configurations offer very similar functionality, though there are some differences here and there. Server cloaking limits the amount of intel a potential attacker can gain on your configuration by hiding server banners, errors, identifying HTTP headers, return codes, and debug information. Server cloaking is available on all versions of the web application firewall, as is DDoS protection.
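
Server cloaking is something the WAF applies at the edge, but the idea is easy to see in miniature. The sketch below (not from the article, and assuming the Flask microframework is available) replaces identifying response headers and swaps detailed errors for a generic page; a real WAF does the same in front of the application without code changes.

```python
from flask import Flask, Response

app = Flask(__name__)

@app.after_request
def cloak_response(response: Response) -> Response:
    # Replace the server banner and drop framework fingerprints a scanner looks for.
    response.headers["Server"] = "webserver"
    response.headers.pop("X-Powered-By", None)
    return response

@app.errorhandler(500)
def generic_error(_exc):
    # Return a bland error page instead of a stack trace or debug output.
    return "An error occurred.", 500
```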


Creating a Turing Machine in Rust


A Turing machine is a mathematical model of computation that reads and writes symbols on a tape based on a table of rules. Each Turing machine can be defined by a list of states and a list of transitions. Starting from a start state (s0), the Turing machine works its way through the states until it reaches a final state (sf). If no transition leads to the final state, the Turing machine will run ‘forever’ and eventually run into errors. A transition is defined by the current state, the symbol read at the current position on the tape, the next state and the next symbol that must be written to the tape. Additionally, it contains a direction that determines whether the head of the tape should move to the left, to the right, or not at all. To visualize this process, let’s take a look at a very simple Turing machine that increments the value on the initial tape by one. ... While this is a very simple Turing machine, we can use the same model to create machines of any complexity. With that knowledge, we are now ready to lay out the basic structure of our project.
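
The article goes on to build this in Rust; purely to make the model above concrete (states, a tape, and transitions that write a symbol and move the head), here is a minimal sketch in Python of the increment machine described. The transition table and names are illustrative, not the article's code.

```python
# A transition maps (state, read_symbol) -> (next_state, write_symbol, head_move).
# This table implements "increment a binary number by one", starting at the
# rightmost digit: trailing 1s become 0s (carry), then a 1 is written.
TRANSITIONS = {
    ("s0", "1"): ("s0", "0", -1),   # carry: turn a trailing 1 into 0, move left
    ("s0", "0"): ("sf", "1", 0),    # no more carry: write 1 and halt
    ("s0", " "): ("sf", "1", 0),    # ran off the left edge: write a new leading 1
}

def run(tape: list, head: int, state: str = "s0", final: str = "sf") -> str:
    while state != final:
        symbol = tape[head] if 0 <= head < len(tape) else " "
        state, write, move = TRANSITIONS[(state, symbol)]
        if head < 0:                 # grow the tape to the left when needed
            tape.insert(0, write)
            head = 0
        else:
            tape[head] = write
        head += move
    return "".join(tape)

print(run(list("1011"), head=3))     # "1011" (11) becomes "1100" (12)
```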


Tech support scammers are using this new trick to bypass security software

Symantec describes this kind of attack technique as 'living off the land', whereby attackers exploit legitimate features in systems to hide malicious activity. In and of itself, obfuscation isn't malicious, but it can be used for malicious purposes. "There are many open source tools to obfuscate code as developers don't want their code to be seen by the users of their software. Similar is the case with encryption algorithms like AES. Such algorithms have wide usage and implementations in the field of data security," said Siddhesh Chandrayan, threat analysis engineer at Symantec. "Both these mechanisms, by themselves, may not generate an alarm as they are legitimate tools. However, as outlined in the blog, scammers are now using these mechanisms to show fake alerts to the victims. Thus, scammers are 'living off the land' by using 'inherently non-malicious' technology in a malicious way," he added.


Standout predictions for the cloud – a CTO guide

“Many businesses have previously shied away from true multi-cloud deployments by favouring public infrastructures due to the perceived expense of private platforms, rooted in the required expertise necessary to run them. However, recent technological developments that enable businesses to take a highly-automated approach have shown that this is now an outdated view of cloud infrastructure. When it comes to transforming with cloud technologies, multi-cloud is proving itself to be the correct endgame for businesses in all industries.” ... “Enterprises are eliminating all the “state” from their endpoint devices, where any changes are stored only temporarily on the device and are quickly and efficiently on-ramped to the organisation’s cloud. “One key benefit, aside from IT efficiency gains, is that it represents an elimination of the “dark data” that was previously stored in employees’ laptops or desktops. Suddenly, all this “dark” data is right at your fingertips – stored in the cloud – as a searchable, analysable and shareable repository.”



Typemock vs. Google Mock: A Closer Look

Writing tests for C++ can be complicated, especially when you are responsible for maintaining legacy code or working third-party APIs. Fortunately, the C++ marketplace is always expanding, and you have several testing frameworks to choose from. Which one is the best? In this post, we'll consider Typemock vs. Google Mock. We'll use Typemock's Isolator++ and Google Mock, the C++ framework that is bundled with Google Test, to write a test function for a small project. As we implement the tests, we'll examine the difference in how the frameworks approach the same problem. ... Fowler defines an order object that interacts with a warehouse and mail service to fill orders and notify clients. He illustrates different approaches for mocking the mail service and warehouse so the order can be tested. This GitHub project contains Fowler's classes implemented in C++ with tests written in Google Mock. Let's use those classes as a starting point, with some small changes, for our comparison.
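
The comparison in the article is between C++ frameworks; to keep this digest's examples in a single language, here is the shape of Fowler's order/warehouse/mail-service scenario using Python's built-in unittest.mock. The class and test below are illustrative, not the article's code.

```python
from unittest.mock import Mock

class Order:
    """Fowler-style order: asks a warehouse to fill it and mails the customer on failure."""
    def __init__(self, product: str, quantity: int):
        self.product, self.quantity, self.filled = product, quantity, False

    def fill(self, warehouse, mail_service) -> None:
        if warehouse.has_inventory(self.product, self.quantity):
            warehouse.remove(self.product, self.quantity)
            self.filled = True
        else:
            mail_service.send("out of stock: " + self.product)

def test_unfilled_order_notifies_customer():
    warehouse = Mock()
    warehouse.has_inventory.return_value = False   # stub the collaborator's answer
    mail_service = Mock()

    order = Order("Talisker", 50)
    order.fill(warehouse, mail_service)

    assert not order.filled
    mail_service.send.assert_called_once()         # verify the interaction, mock-style

test_unfilled_order_notifies_customer()
```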


Caching can help improve the performance of an ASP.NET Core application. Distributed caching is helpful when working with an ASP.NET application that’s deployed to a server farm or scalable cloud environment. Microsoft documentation contains examples of doing this with SQL Server or Redis, but in this post, I’ll show you an alternative. Couchbase Server is a distributed database with a memory-first (or optionally memory-only) storage architecture that makes it ideal for caching. Unlike Redis, it has a suite of richer capabilities that you can use later on as your use cases and your product expand. But for this blog post, I’m going to focus on its caching capabilities and integration with ASP.NET Core. You can follow along with all the code samples on GitHub. ... No matter which tool you use as a distributed cache (Couchbase, Redis, or SQL Server), ASP.NET Core provides a consistent interface for any caching technology you wish to use.
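
The post is built on ASP.NET Core's IDistributedCache; the pattern that interface wraps is cache-aside over a byte-oriented key/value store. As a language-neutral sketch (written here in Python, with an illustrative interface rather than the post's code), the idea looks roughly like this, with the Couchbase/Redis/SQL Server choice hidden behind one small abstraction.

```python
import json
from typing import Callable, Optional, Protocol

class DistributedCache(Protocol):
    """Minimal stand-in for a byte-oriented distributed cache (get/set by key)."""
    def get(self, key: str) -> Optional[bytes]: ...
    def set(self, key: str, value: bytes, ttl_seconds: int) -> None: ...

def get_or_create(cache: DistributedCache, key: str, ttl_seconds: int,
                  load: Callable[[], dict]) -> dict:
    """Cache-aside: return the cached value if present; otherwise load, cache and return it."""
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    value = load()                                   # e.g. an expensive database query
    cache.set(key, json.dumps(value).encode(), ttl_seconds)
    return value
```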


7 reasons why artificial intelligence needs people


As AI projects roll out over the next few years, we will need to rethink the definition of the “work” that people will do. And in the post-AI era, the future of work will become one of the largest agenda items for policy makers, corporate executives and social economists. Despite the strong and inherently negative narrative around the impact on jobs, the bulk of the impact from the automation of work through AI will result in a “displacement” of work, not a “replacement” of work – it’s easy to see how the abacus-to-calculator-to-Excel phenomenon created completely new work around financial planning and reporting, and enterprise performance management. Similarly, AI will end up accelerating the future of work, and the resulting displacement of jobs will be a transition already under way, not an entirely new discussion. As some work gets automated, other jobs will get created, in particular ones that require creativity, compassion and generalized thinking.



Quote for the day:


"A single question can be more influential than a thousand statements." -- Bo Bennett


Daily Tech Digest - November 29, 2018

Closing the Awareness Gap in Technology Projects


The symptoms of a problem with operational awareness can vary. Sometimes you fail to obtain visibility at the level of accuracy you need; sometimes you get that visibility, but don’t know how to act on it; sometimes, even when insights lead to actions, these actions fail to lead to your desired results. If you’re trying, for example, to reduce time delays, your data analytics might show which parts of your project are moving more slowly than expected, but they’re unlikely to pinpoint the precise reason. Problems in one place might be the result of decisions made several steps back in the supply chain or project life cycle. Was planning off? Did procurement write a poor contract? Maybe your workers lack the necessary skills? The experience of using the system may also make it difficult for you and your employees to make sense of the data effectively. For example, in our work with boards of directors, who are taking a growing role in overseeing high-value projects, we sometimes observe members relying heavily on dashboards or documents developed with sophisticated data analytics. 



Three steps toward stronger data protection

Applications responsible for originally sourcing data into the system or modifying data as part of business transactions should also be responsible for digitally signing data before persisting them into databases. Any application retrieving such data for business use must verify the digital signature before using the data, or refuse to use data whose integrity has been compromised. These are concrete steps companies can begin to take immediately to protect themselves. Enabling FIDO in web applications is now possible within a few weeks; incorporating encryption with secure, independent key management systems into applications can be accomplished within a few months. Integrating digital signatures may be accomplished at the same time as encryption or pursued as a subsequent step. By enabling these security controls, companies place themselves far, far ahead of where the vast majority of attacks currently occur.
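
A minimal sketch of the sign-before-persist and verify-before-use flow described above. It assumes the third-party cryptography package and Ed25519 keys purely for illustration; any signature scheme with proper, independent key management could sit behind the same two functions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # in practice the key lives in a KMS/HSM
verify_key = signing_key.public_key()

def persist(record: bytes):
    """Writing side: sign the record before storing record + signature."""
    return record, signing_key.sign(record)

def load(record: bytes, signature: bytes) -> bytes:
    """Reading side: refuse to use data whose integrity cannot be verified."""
    try:
        verify_key.verify(signature, record)
    except InvalidSignature:
        raise ValueError("stored record failed its integrity check; refusing to use it")
    return record

row, sig = persist(b'{"account": "42", "balance": 100}')
print(load(row, sig))   # raises ValueError if the row was altered after signing
```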


Google Faces GDPR Complaints Over Web, Location Tracking
Even though Location History is off by default, Google appears to encourage its users to turn it on through overly simplified and carefully designed user interfaces that may drive users to hit "approve." In contrast to the ease of enabling the feature, any user who wants to research what their choice might mean must undertake extra clicks or explore multiple submenus, Forbrukerrådet's report contends. These design choices may contradict GDPR's requirement for "specific and informed" consent, Forbrukerrådet says. "Users will often take the path of least resistance in order to access a service as soon as possible," the report says. "Making the least privacy friendly choice part of the natural flow of a service can be a particularly effective dark pattern when the user is in a rush or just wants to start using the service." Forbrukerrådet contends that if users don't click on Location History at the start, Google keeps trying to get them to enable it. For example, the report contends that in order to keep location-tracking disabled, users must again decline it when trying to use Google's Assistant, Maps and Photos apps.


Data Science “Paint by the Numbers” with the Hypothesis Development Canvas

The one area under-invested in most data science projects is the thorough and comprehensive development of the hypothesis or use case that is being tested; that is, what it is we are trying to prove out with our data science engagement and how we measure progress and success. To address these requirements, we developed the Hypothesis Development Canvas – a “paint by the numbers” template that we will populate prior to executing a data science engagement to ensure that we thoroughly understand what we are trying to accomplish, the business value, how we are going to measure progress and success, and the impediments and potential risks associated with the hypothesis. The Hypothesis Development Canvas is designed to facilitate the business stakeholder-data science collaboration.


6 Tips To Frame Your Digital Transformation With Enterprise Architecture


Call it digital transformation strategy—call it smart business—enterprise architecture is a method your company can use to organize your IT infrastructure to align with business goals. This isn’t a new concept. In fact, enterprise architecture has been around since the 1960s. But the overwhelming presence of tech in every facet of business today has forced us to rethink it, and to make it a more central focus of business management. ... Enterprise architecture deals with your organizational structure, business model, apps, and data just as much as it does information technology. When you put it together, you need to think from an employee perspective, a customer perspective, and from the perspective of meeting your business goals. After all, your digital transformation will impact your entire company, and your enterprise architecture will need to support it. Your enterprise architecture is of no use to anyone if no one but IT geeks can understand it. When you develop it, use common language. Create easy-to-understand examples.


Machine learning and the learning machine with Dr. Christopher Bishop

The field of AI is really evolving very rapidly, and we have to think about what the implications are, not just a few years ahead, but even further beyond. I think one thing that really characterizes the MSR Cambridge Research lab is that we have a very broad and multi-disciplinary approach. So, we have people who are real world experts in the algorithms of machine learning and engineers who can turn those algorithms into scalable technology. But we also have to think about what I call the sort of penumbra of research challenges that sit around the algorithms. Issues to do with fairness and transparency, issues to do with adversaries because, if it’s a publication, nobody is going to attack that. But if you put out a service to millions of people, then there will be bad actors in the world who will attack it in various ways. And so, we now have to think about AI and machine learning in this much broader context of large scale, real-world applications and that requires people from a whole range of disciplines.


Cloudlets extend cloud power to edge with virtualized delivery


With a cloudlet, there tend to be fewer users and they connect over a private wireless network. Cloudlets are also generally limited to soft-state data, such as application code or cached data that comes from a central cloud platform. In some ways, cloudlets are more like private clouds than public clouds, especially when it comes to self-management. With both cloudlets and private clouds, organizations deploy and maintain their own environments and determine the delivery of services and applications. Cloudlets also limit access to a local wireless network, whereas private clouds are available over the internet and other WANs to support as many users as necessary -- although nowhere near the number of users public clouds support. The private cloud theoretically serves users wherever they reside, whenever they need and from any device capable of connecting to the applications. In contrast, cloudlets are specific to mobile and IoT devices in close proximity.


KingMiner malware hijacks the full power of Windows Server CPUs

The malware generally targets IIS/SQL Microsoft Servers using brute-force attacks in order to gain the credentials necessary to compromise a server. Once access is granted, a .sct Windows Scriptlet file is downloaded and executed on the victim's machine. This script scans and detects the CPU architecture of the machine and downloads a payload tailored for the CPU in use. The payload appears to be a .zip but is actually an XML file which the researchers say will "bypass emulation attempts." It is worth noting that if older versions of the attack files are found on the victim machine, these files will be deleted by the new infection. Once extracted, the malware payload creates a set of new registry keys and executes an XMRig miner file, designed for mining Monero. The miner is configured to use 75 percent of CPU capacity, but potentially due to coding errors, will actually utilize 100 percent of the CPU. To make it more difficult to track or issue attribution to the threat actor, the KingMiner's mining pool has been made private and the API has been turned off.


Managing a Real-Time Recovery in a Major Cloud Outage

While Always On Availability Groups is SQL Server’s most capable offering for both HA and DR, it requires licensing the more expensive Enterprise Edition. This option is able to deliver a recovery time of 5-10 seconds and a recovery point of seconds or less. It also offers readable secondaries for querying the databases (with appropriate licensing), and places no restrictions on the size of the database or the number of secondary instances. An Always On Availability Groups configuration that provides both HA and DR protection consists of a three-node arrangement with two nodes in a single Availability Set or Zone, and the third in a separate Azure Region. One notable limitation is that only the database is replicated and not the entire SQL instance, which must be protected by some other means. In addition to being cost-prohibitive for some database applications, this approach has another disadvantage: being application-specific, it requires IT departments to implement other HA and DR provisions for all other applications.


Reputational Risk and Third-Party Validation

Security ratings are increasingly popular as a means of selecting and monitoring vendors. But Ryan Davis at CA Veracode also uses BitSight's ratings as a means of benchmarking his own organization for internal and external uses. "Taking somebody's word for it isn't enough these days," says Davis, an Information Security Manager at CA Veracode. "You can't just say 'Oh, yeah, well that person said they're secure ..." For CA Veracode, security ratings provided by BitSight offer validation to prospective customers. "We want [customers] to be able to have that comfort that somebody else is also asserting that we're secure." In an interview about the value of security ratings, Davis discusses:
How he employs BitSight Security Ratings; The business value - internally and externally; and How these ratings can be a competitive differentiator. Davis is CA Veracode's Information Security Manager. He is responsible for ensuring the security and compliance of thousands of assets in a highly scalable SaaS environment. Davis has more than 15 years of experience in information technology and security in various industries.



Quote for the day:


"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward


Daily Tech Digest - November 28, 2018


Modern enterprise solutions are not only smarter but also require no physical infrastructure at all, making them more cost-effective than older technologies. The rise in the number of software-as-a-service (SaaS) based enterprise management products has consequently helped more and more entrepreneurs build digitized enterprises through the use of simple and efficient products. SaaS-based cloud application services do not require any storage, servers or databases, but offer greater capabilities such as inter-operability and easier customization. This allows service providers to integrate advanced technologies such as artificial intelligence, machine learning, data mining and analytics into their enterprise systems and business processes to unlock higher levels of productivity like never before.



The 10 most in-demand tech jobs of 2019

The tech jobs landscape of 2019 will likely look largely the same as it did in 2018, with roles in software development, cybersecurity, and data science dominating across industries. "Emerging technologies will be key catalysts for the in-demand jobs we expect to see in 2019," said Sarah Stoddard, community expert at job search site Glassdoor. "From artificial intelligence, automation, virtual reality, cryptocurrency and more, demand for jobs in engineering, product, data science, marketing and sales will continue to rise in order to support the innovation happening across the country." More and more often, traditional companies are beginning to resemble tech companies, and this trend will likely continue throughout the next year, Stoddard said. "As employers across diverse industries, from health care to finance to automotive and more, continue to implement various technologies to streamline workflows and boost business, the demand for top-notch workers who have a balance of technical and soft skills will continue to rise."


GDPR is encouraging UK IT directors to pay cyber ransoms


The Sophos study revealed that small businesses were least likely to consider paying a ransomware demand, with 54% of IT directors at UK companies with fewer than 250 employees ruling out paying their attackers, while just 11% of directors at companies with 500-750 employees said they would opt for this approach. The study, based on more than 900 interviews conducted by market research firm Sapio Research, also showed that UK IT directors are significantly more likely to pay up than their counterparts in other Western European countries. Of the five European countries studied, Irish IT directors were the least likely to pay. Just 19% said they would “definitely” be willing to pay a ransom rather than a larger fine. IT directors in France, Belgium and the Netherlands were also less likely to pay a ransom, with only 33% of respondents in France, 24% in Belgium and 38% in the Netherlands saying they would “definitely” be willing to pay.


New Hacker Group Behind 'DNSpionage' Attacks in Middle East

"It's clear that this adversary spent time understanding the victims' network infrastructure in order to remain under the radar and act as inconspicuous as possible during their attacks," the Talos report noted. The new campaign is the second in recent months targeting Middle East organizations and is a sign of the recently heightened interest in the region among cyberattackers. In September, Check Point reported on new surveillance attacks on law enforcement and other organizations in Palestine and other Middle East regions by a group known as Big Bang. A Siemens report from earlier this year described organizations in the oil and gas sectors in the Middle East particularly as being the most aggressively targeted in the world. Half of all cyberattacks in the region are targeted at companies in these two sectors. According to Siemens, a startling 75% or organizations in these sectors have been involved in at least one recent cyberattack that either disrupted their OT network or led to confidential data loss.


Cisco predicts nearly 5 zettabytes of IP traffic per year by 2022

Cisco says that since 1984, over 4.7 zettabytes of IP traffic have flowed across networks, but that’s just a hint of what’s coming. By 2022, more IP traffic will cross global networks than in all prior “internet years” combined up to the end of 2016. In other words, more traffic will be created in 2022 than in the first 32 years since the internet started, Cisco says. One of the more telling facts of the new VNI is the explosion of machine-to-machine (M2M) and Internet of Things (IoT) traffic. For example, M2M modules accounted for 3.1 percent of IP traffic in 2017, but will account for 6.4 percent of IP traffic by 2022, said Thomas Barnett, director of service provider thought leadership at Cisco. By 2022, M2M connections will be 51 percent of the total devices and connections on the internet. A slew of applications from smart meters, video, healthcare monitoring, smart car communications, and more will continue to contribute to a significant growth in traffic. What that means is customers and service providers will need to secure and manage M2M traffic in new and better ways, Barnett said.


The journey to turning your organisation into a platform

A traditional organisation, which produces a product or service, can become a platform organisation that facilitates exchanges between producers, even its previous competitors, and consumers – it has swapped the means of production for the means of connection. Many platform organisations are now more valuable and durable than traditional companies. Consequently, firms and government agencies now investigate them in their annual strategy processes and innovation groups. So how do you make that journey from traditional “brownfield” organisation to one that can really benefit from the opportunity of being platform-centric? There are three phases to the journey: design, launch and grow. For traditional companies, the search for a platform business model starts outside in an emerging ecosystem, but should also relate to the value created in the existing business model, otherwise the organisation loses the potential competitive advantage of its relationships, intellectual property, products, services, domain knowledge, scale, data and so on.


Quantum Computing to Protect Data: Will You Wait and See or Be an Early Adopter?

One area of data protection that will be affected by quantum computing capabilities is encryption. You see, quantum computing will make current day encryption practices obsolete. The traditional Public Key Infrastructure (PKI) system used can easily come crashing down when public keys become vulnerable to attack by quantum machines. Instead of years to decipher codes, we could be down to minutes or even instantly. That changes life pretty darn dramatically. Just imagine all those security certificates issued for websites, emails and digital signatures to validate authentication becoming obsolete in a matter of minutes. We can already sense the drool from cyber criminals and adversarial nations. Here comes the “the sky is falling” talk, so here’s the disclaimer: we don’t expect this encryption calamity to happen tomorrow, but we do expect it to happen within our lifetime. It’s not unreasonable to think within a decade or so. The 10-15 year mark isn’t all too unreasonable, especially if you start taking into consideration study and standardization. But that’s the problem with any new technology: timing.


How better standards can decrease data security spending needs

Companies across a variety of industries are feeling the strain of increasingly savvy malware and other digital attacks that threaten data security – but it’s not just information that’s at risk. According to businesses, these attacks are also putting pressure on their budgets, with 92 percent of companies planning cyber security budget increases, according to a report by Enterprise Strategy Group. But can budgets keep up with growing security needs? Particularly for small businesses, the only option may be to standardize security practices to hold down costs. As in any industry, standardization makes it easier for companies to assess their needs, access appropriate tools, and can help reduce the cost of those tools overall. Data security, however, is a quickly changing field, creating a barrier to standardization. Recently, though, standardization at the highest levels, specifically starting with the federal government, has opened new doors for companies seeking cyber security solutions that don’t cost a fortune and work better than current approaches.


The need for data literacy

There has been an explosion in the data available for decision making – marketing is no different. In fact, many would argue that being able to understand data, in particular customer data, is now critical to success. For marketing to be truly successful, marketers need to put the customer at the heart of everything, from the initial product or service design right through to delivery and after purchase support, therefore having a clear understanding of customer data at each critical point is a necessity. Because data is now so important, it is often referred to as ‘the new oil’ or ‘the universal language of this fourth industrial revolution’. What is for sure is that the modern marketer needs to be able to ask questions of machines and use data to build knowledge, make decisions and communicate its meaning with board members or stakeholders. The ability to translate data into useable information that can drive and articulate more meaningful campaigns to audiences is a key skill for modern marketers.


Sentiment Analysis: What's with the Tone?


A typical use case is feedback analysis. Depending on the tone of the feedback — upset, very upset, neutral, happy and very happy — the feedback takes a different path in a support center. Sentiment analysis is indeed widely applied in voice of the customer (VOC) applications. For example, when analyzing responses in a questionnaire or free comments in a review, it is extremely useful to know the emotion behind them in addition to the topic. A disgruntled customer will be handled in a different way from an enthusiastic advocate. From the VOC domain, the step to applications for healthcare patients or for political polls is quite short. Similarly, the number of negative vs. positive comments can decide the future of a YouTube video or a Netflix movie. How can we extract sentiment from a text? Sometimes even humans are not that sure of the real emotion when reading between the lines. Even if we manage to extract the feature associated with sentiment, how can we measure it?
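
As a toy illustration of extracting a sentiment feature (not the article's method; production VOC systems use trained models rather than a hand-made word list), the sketch below scores feedback with a tiny lexicon and buckets it into the upset-to-very-happy routing described above.

```python
# Toy lexicon-based scorer: sum word polarities, then bucket the total.
LEXICON = {"great": 2, "love": 2, "good": 1, "ok": 0,
           "slow": -1, "bad": -1, "awful": -2, "hate": -2}

def tone(feedback: str) -> str:
    score = sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in feedback.split())
    if score <= -3:
        return "very upset"
    if score < 0:
        return "upset"
    if score == 0:
        return "neutral"
    if score < 3:
        return "happy"
    return "very happy"

print(tone("I hate how slow and awful the new release is"))   # "very upset" -> priority queue
print(tone("Love the product, great support"))                 # "very happy"
```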



Quote for the day:


"An entrepreneur without funding is a musician without an instrument." -- Robert A. Rice Jr


Daily Tech Digest - November 27, 2018

Mass data fragmentation requires a storage rethink
It’s been estimated that up to 60 percent of secondary data storage is taken up by copies, needlessly taking up space, adding cost and raising risk. Worse, there is no re-purposing of the data for other use cases, such as test/development (where frequent copies of data are made for developers to test or stage their apps) or analytics (where data is copied and centralized in a lake or warehouse to run reports against). Today’s distributed, mobile organizations and easy access to cloud services mean there are more options than ever for data to be stored in multiple locations – perhaps without IT’s knowledge or control. And with the advent of edge computing and the Internet of Things (IoT), some data will never move from its edge location but will need to be managed in situ, away from conventional infrastructure and control. The specialized and siloed nature of secondary infrastructure and operations means IT is burdened with extra Opex and organizational overhead just to "keep the lights on," as well as extra cycles for coordination across functions to meet SLAs, recover from failures, manage upgrade cycles, troubleshoot support issues, and so on.



How to avoid the coming cloud integration panic

Enterprises typically don’t think about data, process, and service integration until there is a tactical need. Even then, they typically get around the issues by pulling together a quick and dirty solution, which typically involves FTP, a file drop, or even Federal Express. The result of all this is that a lot of integration between the cloud and on-premises systems remains undone, be it data integration, process integration, or service integration. This will become a crisis in 2019 for many enterprises, because they can spend the entire year, or more, just pulling together integration solutions for their public cloud systems—which they now depend on for some mission-critical processes. To avoid that crisis, here’s what you need to do. First, catalog all data, services, and processes, using some sort of repository to track them all. You need to do this for all on-premises systems and all public cloud systems, and you need to do so with the intent of understanding most of the properties so you can make sure the right things are talking to the right things.


TLA calls on tech industry to hire one million tech workers by 2023


TLA suggested increasing the amount of funding for female-founded businesses to increase diversity in the city’s tech sector, and recommended encouraging women to join investment firms to push up the likelihood of funding for female-led firms. Linda Aiello, senior vice-president of international employee success at Salesforce, said the “cognitive diversity” of teams created by having a mix of talent will help firms to better reflect their customers, and considering diversity in the tech industry is not only becoming “increasingly important” for product design, but should be considered at all levels of a company. “The technology sector, like almost every other industry, faces a diversity gap,” she said. “This is an issue that’s felt across all organisations and all sectors and it crosses so many threads from gender and race to religion, sexuality and socio-economic backgrounds – each of which contributes to the cognitive diversity of a team.” 


Researchers Use Smart Bulb for Data Exfiltration

For their experiment, the researchers used the Magic Blue smart bulbs, which work with both Android and iOS, and which rely on Bluetooth 4.0 for communication. The devices are made by a Chinese company called Zengge, which claims to be a supplier for brands such as Philips and Osram. The bulbs are marketed as supporting Bluetooth Low Energy (Bluetooth LE or Bluetooth Smart) and the researchers focused on those using the Low Energy Attribute Protocol (ATT). Some of the bulbs are only Bluetooth Smart Ready, the researchers said. The bulbs use Just Works as the pairing method, which allowed Checkmarx to sniff the communication with the mobile application used for control. The Android application, the company discovered, works with other bulbs that have the same characteristics as well. The researchers paired the mobile phone running the iLight app with the smart bulb and started controlling the device, while also attempting to capture the traffic.


How to implement Enterprise DevOps: 5 steps

Under a traditional IT operating model, there are generally too many handoffs between teams, said John Brigden, vice president of Amazon Web Services (AWS) Managed Services, during a Monday session at AWS re:Invent 2018. "You've got lots of handoffs when a change is made, or any kind of adjustment is made to the environment ... and that can result in loss of innovation, loss of speed, and a lot of other challenges the enterprise faces today," Brigden said during the session. The notion of DevOps and DevOps teams in general can also be flawed, he added. "You might have tens, even hundreds of DevOps teams in your environment, and if these DevOps teams are left to figure everything out for themselves—network configuration, security compliance, compliance with PCI, change management, automation, in addition to writing the application to achieve their business outcome —you can get to a place where you have a lot of non-standardization, a lot of complexity, and perhaps create an environment that could slow down what you're really trying to achieve," Brigden said.


Weren’t algorithms supposed to make digital mortgages colorblind?

Some online lenders, such as Upstart (which does not offer mortgages), have said their algorithms help reduce the cost of credit and give more people offers at better pricing than traditional lenders. Upstart uses “alternative” data about education, occupation and even loan application variables in its underwriting models. (For instance, people who ask for round numbers like $20,000 are a higher risk than people who ask for odder numbers like $19,900.) “A lot of variables that tend to be correlated with speed or lack of prudence are highly correlated with default,” Upstart co-founder Paul Gu said in a recent interview. “And indications that someone desperately needs the money right away will be correlated with defaults.” Such factors are less discriminatory than relying on FICO scores, which correlate to income and race, according to the online lender. But in the mortgage area, it appears that bank and fintech lenders are baking traditional methods of underwriting into their digital channels.
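
The round-number signal mentioned above is simply a derived feature. The sketch below is a hypothetical illustration of how such a variable might be encoded from the loan application before it reaches an underwriting model; Upstart's actual features and thresholds are not public at this level of detail.

```python
def application_features(amount_requested: int) -> dict:
    """Derive loan-application variables of the kind described above."""
    return {
        "amount": amount_requested,
        # People asking for round figures ($20,000) versus specific ones ($19,900)
        # behave differently; encode that as a simple binary feature for the model.
        "is_round_thousand": amount_requested % 1000 == 0,
        "is_round_hundred": amount_requested % 100 == 0,
    }

print(application_features(20000))   # round to the thousand (and the hundred)
print(application_features(19900))   # round to the hundred only
```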


It’s complicated: How enterprises are approaching IAM challenges


IAM is all of these things and more – and for those running security in the enterprise, it is clear that living with the multiplicity of IAM is par for the course because IAM is more than just identity provisioning or access governance or single sign-on (SSO) or any one of a long list of disciplines. The success, or otherwise, of identity management in companies today relies on moving from singular and isolated technical initiatives to a full IAM programme – or at least having a plan for such a journey. “If you had to single out a sector at the cutting edge of IAM, it’s financial services,” says Martin Kuppinger ... “That’s because finances need good protection – and regulators and the sector itself have long required secure digital identities and standardised processes. Yet that’s only one part of the IAM story now, because next to this security-first identity agenda is a parallel consumer-convenience move being driven by the large digital companies that are developing a different kind of expertise in consumer identity management.”


Pattern Recognition and Machine Learning

This leading textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. This is the first machine learning textbook to include a comprehensive coverage of recent developments such as probabilistic graphical models and deterministic inference methods, and to emphasize a modern Bayesian perspective. It is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. This hard cover book has 738 pages in full colour, and there are 431 graded exercises. Solutions for these exercises and extensive support for course instructors are provided on Christopher Bishop’s page. Now available to download in full as a PDF.


Hiring tips: 9 secrets to working with IT recruiters

You can’t expect recruiting professionals, whether internal or external, to find the best talent if you’re not one hundred percent honest and open about the available role or roles, what you’re looking for, your timeline, what you’re willing to pay and the amount of competition for the vacancy, says Mondo’s Zafarino. “One thing that is key from the recruiter’s perspective is having full transparency from the CIO or IT hiring manager,” Zafarino says. “If there are internal candidates in the running, too; if you’re using other agencies as well, that’s fine. But you must communicate this to your recruiting partner. Let them know where your budget approval stands, or if you’re still working on getting the resources. And the most important thing is allocating the right amount of time for recruiters to fill the need. If it’s an urgent need, we’ll go full steam ahead, but if it’s a more passive potential hire then we’ll reallocate sources according to your needs and where you’re at in the process.”


Great Scrum Masters Are Grown, Not Born


Here's my assertion: Scrum Masters are Agile Coaches because they do what Agile Coaches at the program level do; they just do it within the scope of one or a few teams. They need all the skills and self-leadership that Agile Coaches at the program level need to be really effective for the teams they serve.  I am part of the working group ICAgile commissioned to refresh the Learning Path for Agile Coaching which was released earlier this year. When we got together, one of the main things we wanted to adjust in the community at large was this notion that a Scrum Master is somehow a less powerful role than Agile Coach or that it's even an administrative role that does not require a lot of skill. These were damaging applications of the roles that we saw across the industry. It resulted in stunted Scrum Masters who were not allowed to develop the skills needed to really help teams not only deliver, but deliver while improving team capabilities. The people on the ground need a full complement of skills because on the ground, with teams, day in and day out, is where the action is.



Quote for the day:


"Leadership happens at every level of the organization and no one can shirk from this responsibility." -- Jerry Junkins


Daily Tech Digest - November 26, 2018

The race to create a real-life Star Trek medical scanner


Basil Leaf Technologies is still working towards creating a Tricorder in the way that most people think of it: a single device that can diagnose a range of conditions. For a real-life Tricorder to serve as a universal diagnostic tool in the way that Star Trek envisioned, it would need to be able to analyse far more biomarkers than the DxtER currently does. Handily, scientists are also working on expanding the capabilities of Tricorder-like devices. Earlier this year, researchers from the University of Glasgow created a handheld sensor device based on a CMOS chip that can analyse a number of metabolites in blood or urine, analysing them to diagnose conditions including heart attacks. Elsewhere, companies are working on creating Tricorder type hardware with a focus on infectious disease: the Q-POC, made by QuantumDx, is expected to launch next year, and brings handheld diagnostics for bacterial and viral infections.


“The increasing use of hybrid cloud environments by enterprises also lines up nicely with the software-defined data center story, which HCI is certainly a large part of,” Lagana says. HCI has become a suitable platform for broader use due to a lot of the underlying improvements in the technology, Lagana says. At the same time, many enterprises have gone through an IT “refresh cycle” and HCI seems like a natural transition. “We’ve spoken with some HCI adopters and, in some cases, folks we’re talking to are upgrading multiple generation-old infrastructure running on old, sometimes now unsupported software,” Lagana says. “At that point, if the old server and/or storage technology they’re using is that far behind what’s now available, it becomes a matter of the level of complexity they’re seeking in their new environment.”


11 common wireless security risks you don't have to take


The best thing is to acknowledge your wireless ecosystem has security holes in it. This is even more likely when you have users connecting to random wireless hotspots at home, while traveling and so on. Even if you eliminate all the above vulnerabilities and implement WPA3, your business can be exposed to someone mimicking a legitimate AP -- the "evil twin" vulnerability, which has been around since the inception of Wi-Fi. Not only can an evil twin attack exploit network systems and information, but when it does happen you'll likely never know about it. The evil twin vulnerability can be mitigated using a wireless intrusion prevention system offered by many of the big networking vendors. Still, these systems won't protect your mobile users when they are out and about.



Regulator action will take time – six months is too early to get a proper read. Yet, we can still get a feel for what is going on by looking at what’s happening in a given country. The UK is interesting; its Information Commissioner predates GDPR, as the UK’s privacy regulations go back to 1998. The UK commissioner is currently publishing findings and leveling fines after investigations for activities dating back to 2016. That gives us a feel for how long investigations may take under GDPR. Perhaps we will not know the full impact, or the magnitude of fines levied, for another two years. Facebook’s challenges with Cambridge Analytica were lucky in that they fell under the prior law, resulting in a smaller 500K GBP fine rather than the billions allowed by GDPR. Breaches at British Airways and others, which took place since GDPR became active, are being carefully monitored to see if in fact they were properly reported to the UK commission within the 72-hour limit of being discovered.

When working on complex challenges, you’ll need to try doing new things (new offerings) and doing old things in new ways (new processes). But this risk-taking has to be prudent. At my firm, new team members must have the diligence and humility to learn the established way of handling a problem before they invent a new way. We try small experiments in safe contexts (tweaking established offerings and processes with trusting and trusted partners) before trying big experiments in dangerous contexts. In Mexico, for instance, although the work involved a unique situation and lots of trial and error, a foundation of decades of relevant experience enabled us to advance. You can improvise well only if you have practiced a lot. ... Often, you can’t rely only on your own perspective. Ask for feedback: from your colleagues, clients, and anyone else involved with the problem you’re trying to solve. Ask casually and formally, verbally and in writing, and with specific and open-ended questions.



8 strategies to keep legacy systems running


While most organizations are moving business operations to Software as a Service (SaaS) and cloud computing solutions, some organizations retain dependencies on legacy platforms and the software that runs on them. Maintaining access to these deprecated platforms can often be a source of frustration for IT, as aging hardware and software often requires scavenging websites such as Craigslist or eBay for decades-old parts. However, new parts and software can be used in its place, making the process easier. For software which requires the use of older operating systems, VirtualBox can readily be used to virtualize the OS and application, allowing the legacy environment to be used on modern hardware. VirtualBox has a built-in host for Remote Desktop Protocol (RDP), allowing users to connect remotely to a VirtualBox VM. VirtualBox is more adept at handling virtualization for legacy software than QEMU/KVM or other modern hypervisors.


Why Deep Defense Should Start with Detecting Compromised Credentials


In a worst-case scenario, the credentials for an admin account could grant access to an advanced threat actor – once they are in the environment they can move laterally, placing backdoors, RATs and other software to become persistent, and exfiltrate the data of employees or customers to resell or utilize for their own financial gain. Though phishing and spear-phishing remain somewhat seminal techniques, particularly when combined with social engineering, malware use is often more efficient in terms of volume and timeliness than phishing. Though more complex skills are required for this tactic to be efficient, many malware families are openly sold as-a-service – AgentTesla, for example, is marketed at between $6 and $15 per month, with customer support and updates available, bringing the barrier to entry down. Advanced attackers may use malware to infect machines and move laterally in an organization’s network.


New Linux crypto-miner steals your root password and disables your antivirus


The trojan itself is a giant shell script of over 1,000 lines of code. This script is the first file executed on an infected Linux system. The first thing this script does is find a folder on disk to which it has write permissions, so it can copy itself there and later use the folder to download other modules. Once the trojan has a foothold on the system, it uses one of two privilege escalation exploits, CVE-2016-5195 (also known as Dirty COW) and CVE-2013-2094, to get root permissions and have full access to the OS. The trojan then sets itself up as a local daemon, and even downloads the nohup utility to achieve this operation if the utility is not already present. After the trojan has a firm grasp on the infected host, it moves on to the primary function it was designed for, which is cryptocurrency mining. The trojan first scans and terminates the processes of several rival cryptocurrency-mining malware families, and then downloads and starts its own Monero-mining operation.


How to tell which IoT predictions to pay attention to

Probably the main reason for the difficulty in predicting where the IoT market at large is going to go is that there's no general agreement on a precise definition of the boundaries of that market. Hence, the large number of large numbers purporting to describe the "size of the IoT market," which are frequently measuring very different aspects of it. “Everyone knows it’s going to be big,” said Alan Griffiths, principal consultant with market researcher Cambashi. “And no one’s got the faintest idea, in my opinion, of how big it’s going to be.” He talks to top technical people – CIOs and CTOs – for his estimates of the IIoT market, which gives him a better read on who’s buying what. Griffiths’ research on the IIoT market highlights another important point: IoT trend predictions focused on more specific market segments, or on particular technologies, tend to be a lot more digestible. The relevant details needed to create such an analysis are easier to get, and it’s more difficult to make guesswork look presentable.


Blockchain Implementations are still POCs


The problem is with well-intentioned people in the business and technology community who are still in awe of the promises of Bitcoin. They are now hurting the cause and becoming a burden by forcing a single line of thought, or a single defined checklist, onto any Blockchain implementation. I think technology should be allowed to evolve organically and not be made the prisoner of the ‘original idea.’ I believe identifying the business problem you want to solve will be the key to the success of any Blockchain implementation (and rather than the phrase ‘Blockchain implementation’ it should really be ‘Blockchain network setup’ plus ‘application implementations’ on that setup). Eliminating intermediaries is a Utopian idea in which one is asked to enter a business transaction where one has to trust a set of programmers of a Blockchain platform rather than an entity that can be dragged into a court of law in the event of a dispute.



Quote for the day:


"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner


Daily Tech Digest - November 25, 2018

Artificial intelligence: Germans see no reason to fear robot coworkers

One example of how AI can benefit people is automated driving. Bosch is striving to make road transportation emissions-free, accident-free, and stress-free. With nine out of ten accidents currently attributable to human error, smart technology could use AI to prevent many of these from happening in the first place. Connected manufacturing is another banner field for AI. In a smart factory, people and machines will work together as an intelligent team. Robots will relieve people of strenuous and dangerous tasks and learn from experience. This will reduce people’s burden. The Bosch survey found that many Germans could imagine being able to accept this situation. Two-thirds of respondents – 67 percent – believe that manufacturing and mobility are going to benefit greatly from artificial intelligence. They are also open to working with a robot if it takes over routine chores. Half of all respondents could well imagine such a situation, and would above all devote the free time gained to social or creative activities.


Women in Blockchain: CryptoWendyO talks about her motivation

There’s so much negative energy directed at crypto from mainstream financial institutions because the public “doesn’t like change.” “Because crypto is intangible, it’s hard for the masses to understand. “We saw this with the internet and credit cards. If you notice, the group of folks present when credit cards became mainstream still write cheques – as time progresses, so will the masses.” The recent falls after the hard fork mean the market – which is basic supply and demand – needs a “catalyst to bring in new money.” WendyO says: “There’s nothing we can do individually to stop negative price action. What we can do is support one another and continue to support the entrepreneurs building in the space. They are the key to mass adoption. “Once Blockchain projects are seamless and make life easier for the masses, they will come.” Asked by me why people are panicking so much, she believes: “Price impacts the human psyche so much. People are entering into positions without proper risk management and education.



We all know how the media and the film industry are overhyping AI with androids and over-intelligent systems. Some computer pioneers, Alan Turing (you may want to watch The Imitation Game to appreciate the legend he is) at the forefront, did set off on projects with a view to making machines that think. Turing, however, did realise that this would be abysmally difficult, and in 1950 proposed: Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education, one would obtain the adult brain. This idea grew on to become Deep Learning. Fast forward to 2018: we have, and are still gathering, massive amounts of data. We have and are still developing more and more advanced algorithms. But do we have the hardware to crunch all those calculations within reasonable time? And if we do, can it be done without having all those GPUs cause another global warming on their own by literally heating up from all the processing?


Forget Robots, Blockchain Technology May Be the Real Threat to Your Job

Blockchain isn't just the technology behind the Bitcoin craze. It could also mean the end of the middle manager.
Traditionalists say the middle manager is a necessary component of an organization, freeing senior management to think strategically and move away from the day-to-day while building a talent bench of the next generation of senior managers. Detractors ask what a middle manager actually adds to the bottom line, pointing to an unclear or difficult-to-define return on investment. The truth, as is often the case, lies somewhere in the middle. But it may not matter. Many organizations have clear, tangible, quantifiable key performance indicators for day-to-day functions, like sales closed or widgets shipped. With the advent of smart contracts on the blockchain, it’s clear: robots aren’t the only ones gunning for your job. Blockchain technology is too. A smart contract is code designed to facilitate, verify or enforce performance of set terms. ... Notably, this is not a far-off concept; it’s something that, in many situations, could be implemented tomorrow.
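The excerpt doesn’t include the article’s own example, but the logic can be sketched in plain Python: a contract-like object that verifies an agreed KPI (here a hypothetical widgets-shipped target) and releases payment automatically, with no manager signing off. Real smart contracts run as code on a blockchain rather than on a single machine, so this is only an illustration of the idea, and all names are made up.

```python
# Toy sketch (not a real on-chain contract): a smart-contract-style rule that
# releases payment automatically once a verifiable KPI threshold is met.
# All names (WidgetContract, kpi_target, etc.) are hypothetical.
from dataclasses import dataclass

@dataclass
class WidgetContract:
    kpi_target: int          # e.g. widgets that must be shipped
    payment_amount: float    # amount released when the target is verified
    paid: bool = False

    def record_shipment(self, widgets_shipped: int) -> str:
        """Verify the agreed term and enforce it without a middle manager."""
        if self.paid:
            return "already settled"
        if widgets_shipped >= self.kpi_target:
            self.paid = True
            return f"payment of {self.payment_amount} released"
        return f"target not met ({widgets_shipped}/{self.kpi_target}); payment withheld"

contract = WidgetContract(kpi_target=1000, payment_amount=25_000.0)
print(contract.record_shipment(850))    # target not met, payment withheld
print(contract.record_shipment(1200))   # target verified, payment released
```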


Rebooting analytics leadership: Time to move beyond the math
CAOs often find themselves doing this heavy lifting with a limited sphere of influence. They typically do not have the profit-and-loss or revenue accountability that would grant them due power in the organization. Moreover, like chief marketing officers a decade ago, CAOs need – but typically lack – a true seat at the C-suite table, placing them at a disadvantage when trying to obtain adequate funding or resources to power the analytics agenda. ... Arguably, none of the previous CAO personas could succeed in today’s landscape. We’ve entered an era that requires a new CAO persona – the Catalyst – who embraces a style of leadership geared toward addressing the current demands, roadblocks, and scrutiny most companies face today when deploying AI and advanced analytics at scale. Catalysts approach their role very differently from past CAO personas, in ways that leaders from more scientific and technical career backgrounds may never have operated before.



How voice biometrics catches fraudsters


According to Costain, it is relatively easy for the system to identify a new voice. Often, a fraudster will phone in to check whether stolen credentials are valid, but in certain cases, the fraudster may scam the customer to obtain these credentials. “It’s a bit like epidemiology with Patient Zero,” he said. The same voice may try to access multiple accounts, which would signal an attempted fraud. RBS has also been compiling a database of evidence, which Costain said has led to a few police arrests of people who have made fraudulent calls. Over the next six months, the bank will have technology to enable customers to determine whether a call they receive from the bank is genuine, he said. Experian’s Global fraud report 2018 found that customers want to be recognised, while businesses want to address the growing fraud they are experiencing.
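The article doesn’t describe RBS’s implementation, but the core signal (one voice turning up against many different accounts) can be sketched with a simple voiceprint-matching loop. The embeddings, similarity threshold and account IDs below are all hypothetical.

```python
# Minimal sketch (assumed approach, not RBS's actual system): flag a caller
# whose voiceprint embedding matches past calls made against many different accounts.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def accounts_reached_by_same_voice(new_print, call_log, threshold=0.85):
    """call_log: list of (voiceprint_embedding, account_id) from past calls."""
    return {acct for emb, acct in call_log
            if cosine_similarity(new_print, emb) >= threshold}

# Hypothetical data: three past calls, two of them made by the same voice.
rng = np.random.default_rng(0)
fraud_voice = rng.normal(size=64)
call_log = [(fraud_voice + rng.normal(scale=0.05, size=64), "acct-001"),
            (fraud_voice + rng.normal(scale=0.05, size=64), "acct-002"),
            (rng.normal(size=64), "acct-003")]

hits = accounts_reached_by_same_voice(fraud_voice, call_log)
if len(hits) > 1:
    print(f"ALERT: same voice seen against {len(hits)} accounts: {sorted(hits)}")
```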


AI and Neuroscience: A virtuous circle


Another key challenge in contemporary AI research is known as transfer learning. To be able to deal effectively with novel situations, artificial agents need the ability to build on existing knowledge to make sensible decisions. Humans are already good at this: an individual who can drive a car, use a laptop or chair a meeting is usually able to cope even when confronted by an unfamiliar vehicle, operating system or social situation. Researchers are now starting to take the first steps towards understanding how this might be possible in artificial systems. For example, a new class of network architecture known as a “progressive network” can use knowledge learned in one video game to learn another. The same architecture has also been shown to transfer knowledge from a simulated robotic arm to a real-world arm, massively reducing the training time. Intriguingly, these networks bear some similarities to models of sequential task learning in humans. These tantalising links suggest that there are great opportunities for future AI research to learn from work in neuroscience.
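As a rough illustration of the progressive-network idea (not the researchers’ exact architecture), assuming PyTorch: a column trained on the first task is frozen, and a new column for the second task receives a lateral connection from the old column’s hidden layer, so prior knowledge is reused rather than overwritten.

```python
# Sketch of a two-column progressive network: column A is frozen after training
# on task A; column B learns task B while reading A's hidden features laterally.
import torch
import torch.nn as nn

class ColumnA(nn.Module):                        # trained on the first game, then frozen
    def __init__(self, in_dim=16, hidden=32, out_dim=4):
        super().__init__()
        self.h = nn.Linear(in_dim, hidden)
        self.out = nn.Linear(hidden, out_dim)
    def forward(self, x):
        h = torch.relu(self.h(x))
        return self.out(h), h                    # also expose hidden features

class ColumnB(nn.Module):                        # new column for the second game
    def __init__(self, col_a: ColumnA, in_dim=16, hidden=32, out_dim=4):
        super().__init__()
        self.col_a = col_a
        for p in self.col_a.parameters():        # freeze previously learned weights
            p.requires_grad = False
        self.h = nn.Linear(in_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)  # lateral connection from column A
        self.out = nn.Linear(hidden, out_dim)
    def forward(self, x):
        _, h_a = self.col_a(x)
        h_b = torch.relu(self.h(x) + self.lateral(h_a))
        return self.out(h_b)

x = torch.randn(8, 16)
policy_b = ColumnB(ColumnA())
print(policy_b(x).shape)                         # torch.Size([8, 4])
```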


6 ways to include dark data in analytic strategies

The goal for CIOs is simple: Find out what data is under company management that the company possibly didn't know it had. Then, develop a strategic data plan with executives that addresses what to do with this data so that it delivers its highest value to the company. ... As soon as it is determined that certain areas of data are useful, begin to digitalize and exploit them for value so the data starts working for you. ... Outside data sources can enhance the value of data you already have under management. A prime example is the monitoring of Greenland's ice pack. If you monitor climate change and are concerned about the pace of global warming, you can study historical photos of Greenland's land mass from decades ago. Comparing Greenland as it was decades ago to how it is today can demonstrate both the impact and progression of global warming. ... As paper-based forms of unstructured data are digitalized, it is essential for the data to undergo quality assurance checks for integrity and quality.
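As a small, hypothetical illustration of such quality assurance checks, assuming pandas and made-up column names: newly digitalized records are scanned for duplicates, missing values, unparseable dates and implausible amounts before they feed the strategic data plan.

```python
# Illustrative sketch (hypothetical columns and rules): basic integrity and
# quality checks on newly digitalized, formerly paper-based records.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "bad_dates": int(pd.to_datetime(df["record_date"], errors="coerce").isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

records = pd.DataFrame({
    "record_date": ["2018-11-01", "not-a-date", "2018-11-03"],
    "amount": [120.5, -4.0, None],
})
print(quality_report(records))
```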


Generative Adversarial Networks (GANs) – The Basics You Need To Know

As the name suggests, these are called adversarial networks because they are made up of two neural networks that contest with each other. The first network, called the Generator, generates new data instances; the second, called the Discriminator, evaluates the first network's output for authenticity. The cycle continues until the results reach, or come close to, the desired accuracy. ... To understand Generative Adversarial Networks, it is very important to differentiate between supervised learning and unsupervised learning. ... GANs are a fairly new architecture in the deep learning domain. They fall under the unsupervised neural network category, and on generative tasks their results can be far better than those of traditional neural nets. It has even been suggested that the query suggestions we see while typing into a search engine such as Google draw on similar generative techniques.
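Here is a minimal sketch of the Generator/Discriminator loop described above, assuming PyTorch and toy one-dimensional data: the generator learns to mimic samples drawn from a normal distribution centred at 3, while the discriminator learns to tell real samples from generated ones.

```python
# Minimal GAN sketch (illustrative, not from the article): two networks trained
# against each other on toy 1-D data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0             # "real" data: mean 3, std 1
    fake = G(torch.randn(64, 8))                # generated data from random noise

    # Discriminator step: label real samples 1, generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its samples real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean ~", G(torch.randn(1000, 8)).mean().item())  # should drift toward 3
```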


Distributed Machine Learning Is The Answer To Scalability And Computation Requirements


It was this challenge of handling large-scale data, given the scalability and efficiency limits of learning algorithms with respect to computational and memory resources, that gave rise to distributed ML. For example, if the computational complexity of an algorithm outpaces the main memory, the algorithm will not scale well: it will not be able to process the training data set, or it will not run at all due to memory restrictions. Distributed ML algorithms arose to handle very large data sets and to provide algorithms that are efficient and scalable with regard to both accuracy and computational requirements. Distributed ML is part of large-scale learning, which has received considerable attention over the last few years thanks to its ability to spread the learning process across several workstations – distributed computing used to scale up learning algorithms. It is these advances that make ML tasks on big data scalable, flexible and efficient.
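A toy illustration of the data-parallel idea (simulated workers in one process, not a real cluster): each worker computes a gradient on its own shard of the training data, and the averaged gradient updates a shared linear-regression model. This is the pattern that lets learning scale when a single machine cannot hold or process the whole data set.

```python
# Toy sketch of data-parallel training with synchronous gradient averaging.
# The "workers" here are just loop iterations; on a cluster each shard would
# live on a separate machine.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(10_000, 2))
y = X @ true_w + rng.normal(scale=0.1, size=10_000)

n_workers = 4
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
w = np.zeros(2)                                   # shared model parameters

def local_gradient(w, X_shard, y_shard):
    """Gradient of mean squared error computed on one worker's shard."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

for epoch in range(200):
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]   # in parallel on a cluster
    w -= 0.05 * np.mean(grads, axis=0)                         # synchronous averaging step

print("learned weights:", np.round(w, 3))         # should approach [2, -1]
```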



Quote for the day:


"You can't just wish change; you have to live the change in order for it to become a reality." -- Steve Maraboli