Daily Tech Digest - June 08, 2019

Building to last: the industrial internet of things and sustainability

The potential for IoT solutions in industrial environments is huge, and, sadly, realising it is also necessary, because in contrast to the possibilities uncovered by the WEF, the Organisation for Economic Cooperation and Development (OECD) has reported that a group of ‘mega trends’ is driving change throughout businesses and the world at large, trends that will influence policy, commodity prices, energy and even the availability of water and other essential resources. The OECD’s trends include the world’s population being forecast to grow by an additional 3 billion by 2050, almost half as much again as today’s population. So many people on the planet will place exponentially greater demand on agricultural land for cereal crops and animal products, while 55% more water will be required for social needs and 400% more water will be needed by manufacturing compared with the year 2000. On top of that, such an increase in population would also mean manufacturing’s demand for electricity ramping up by 140%.



The Next Steps for International Cooperation in Fintech

A significant disruption to the financial landscape is likely to come from the big tech firms, which will use their enormous customer bases and deep pockets to offer financial products based on big data and artificial intelligence. These developments hold out the promise of accelerating inclusion and modernizing financial markets, but raise, in addition to privacy issues, competition and market concentration concerns, both of which could lead to vulnerabilities in the financial system. China’s technology industry is a prime example of this trade-off between benefits and challenges. Over the last five years, technology growth in China has been extremely successful and allowed millions of new entrants to benefit from access to financial products and the creation of high-quality jobs. But it has also led to two firms controlling more than 90% of the mobile payments market. This presents a unique systemic challenge to financial stability and efficiency, and one I hope we can touch on during the G20 and address in a cooperative and consistent fashion.


Why We Need a People-first AI Strategy


We need to have what I term a “people-first” AI strategy. We have to use technology, not because technology exists, but because it helps us to become better individuals. When organizations deploy AI inside their work processes or systems, we have to explicitly focus on putting people first. This could mean a number of things. There will be some instances of jobs getting automated, so we have to make sure that we provide adequate support for re-skilling, for helping people transition across jobs, and making sure they don’t lose their livelihoods. That’s a very important basic condition. More importantly, AI provides tools for predicting outcomes of various kinds, but the actual implementation is a combination of the outcome prediction plus judgment about that prediction. The judgment component should largely be a human decision. We have to design processes and organizations such that this combination of people and AI lets people be in charge as much as possible. There has to be a human agency-first kind of principle that lets people feel empowered about how to make decisions, how to use AI systems to make better decisions.



5 Ways Big Data Can Vitalize Healthcare

There is a unique challenge that third-world countries, especially those on the Asian continent, are facing: the quadrupling of the ageing population. The United Nations Population Division reports that the population of the elderly (aged 65 and above) has increased to four times the ageing population of the 1900s. Accompanied by a decline in birth rates, this means there are more elderly who need healthcare support than ever before. Time, distance and talent shortages make it difficult to attend to these elderly on a regular basis. It is here that remote patient monitoring systems pitch in with a helping hand. Enabled with advanced technologies like the Internet of Things and telemedicine, healthcare professionals are now able to reach out to remotely located elderly easily. Singapore’s Elderly Management System (EMS) is a classic example of such data-driven health care initiatives. ... Forecasting healthcare requirements on a regional level cannot be done with siloed information. Doctors, hospitals and administration need a combined view of the population demographics, the health challenges that they face and the bottlenecks that need to be resolved to improve health care.


When AI Becomes an Everyday Technology


At its core, AI is about automating judgments that have previously been the exclusive domains of humans. This is a significant challenge unto itself, of course, but it brings with it significant risk as well. Increasing effort, for instance, is required to make the decisions of AI systems more transparent and understandable in human terms. Additionally, best practices are emerging on how to use data sets and testing to ensure each sub-population of users is treated with fairness and consistency. There are also adversarial examples — deliberately misleading input intended to cause an AI system to misbehave — as well as deepfakes — realistically modified video — among many other emerging challenges. As leaders in AI, it’s our responsibility to face all of these complexities, and provide the expertise our customers and their users need to steer this technology in the right direction. ... Sooner or later, every technology transitions from an elite niche to a mainstream tool. AI is now undergoing a similar transformation. After years of hype around mysterious neural networks and the PhD researchers who design them, we’re entering an age in which just about anyone can leverage the power of intelligent algorithms to solve the problems that matter to them.


Kaminario Drives Composable Storage 2.0

The combination of Kaminario VisionOS and Flex software enables customers to build racks of commodity, off-the-shelf compute and solid-state drives (SSDs), and to then compose those resources into logical storage resources that are tailored to specific requirements. For example, to serve high performance workloads, the configuration can be CPU-heavy to drive controller functionality with a smaller number of SSDs. More capacity-oriented workloads may require just two controllers but larger amounts of high-capacity flash. The most basic configuration of these storage resources includes two active-active storage controllers, storage capacity in the form of SSDs, and Fibre Channel (FC) or iSCSI connectivity. These storage resources can be scaled as needed, adding one or more storage controllers for additional performance, or one or more JBOF shelves for additional capacity. This enables the creation of scale-out clusters, which complement the ability of each node to scale up, with additional flexibility.
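
A rough sketch of that composition model (not Kaminario's actual API; the class names, sizing rules and pool counts below are hypothetical) might allocate controllers and JBOF shelves from a shared pool like this:

```python
from dataclasses import dataclass

@dataclass
class StoragePool:
    free_controllers: int     # commodity controller nodes available
    free_jbof_shelves: int    # JBOF shelves of SSDs available

@dataclass
class LogicalArray:
    controllers: int
    jbof_shelves: int
    connectivity: str         # "FC" or "iSCSI"

def compose_array(pool: StoragePool, workload: str, connectivity: str = "iSCSI") -> LogicalArray:
    """Compose a logical storage resource sized for the workload.

    Performance-oriented workloads get extra controllers (CPU-heavy, fewer SSDs);
    capacity-oriented workloads keep the minimum two active-active controllers
    and take more JBOF shelves instead.
    """
    if workload == "performance":
        controllers, shelves = 4, 1
    else:  # "capacity"
        controllers, shelves = 2, 4

    if controllers > pool.free_controllers or shelves > pool.free_jbof_shelves:
        raise RuntimeError("not enough free resources in the pool")

    pool.free_controllers -= controllers
    pool.free_jbof_shelves -= shelves
    return LogicalArray(controllers, shelves, connectivity)

pool = StoragePool(free_controllers=8, free_jbof_shelves=6)
oltp_array = compose_array(pool, "performance", "FC")   # CPU-heavy, FC-attached
archive_array = compose_array(pool, "capacity")         # two controllers, more flash
```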


How to fail at digital transformation: 3 pitfalls to avoid at all costs


One of the biggest mistakes companies make is placing too much emphasis on the digital portion of digital transformation, said Solis. In other words, companies shouldn't take on digital initiatives for the sake of the trend. Companies will fail if they follow "Whatever the flavor of the month is, when you look at getting that technology because it's hot, and it's what everybody is doing," said Solis. "If you don't give it a sense of purpose, then that technology is going to be finite in its value to the organization." ... When teams get overwhelmed with too many digital transformation initiatives, their communication will begin to crumble, and the projects will follow suit, Hennessy noted. Organizations must make sure everyone in the company is aware and up to speed on the digital changes occurring. "If the communication is consistent and solid about the 'why' for the change, that can help the organization be ready for it," Hennessy said. "Then you make the change, with a lot of communication around it, reinforce the change over time, and reframe into this new state. ..."


What does an Enterprise Architect look like?

To clarify, the key job of the Enterprise Architect is to deliver the enterprise model, the blueprint. Because today each Enterprise Architect comes with their own definition and preferred framework, the enterprise should employ only one Lead Enterprise Architect to avoid conflicting EA developments and directions. The Enterprise Architect has to establish the EA framework, principles, methods and tools, coordinate the team, and plan the work. The role ultimately has to harmonize and coordinate the development of all business units' architectures to make sure they fit coherently into the whole. If the enterprise is large enough, though, each business unit should have an EA architect who reports, in architectural matters, to the Chief EA so that all outcomes are compatible. The Solution Architects may or may not report to the Chief Architect, depending on the IT organization. Likewise, the Information Architects and Process Analysts may not report to the EA Architect.


Digital M&A in an era of increased regulatory scrutiny

Alongside the surge in interest in data and tech acquisitions, international regulatory bodies are stepping up their levels of scrutiny to ensure that M&A activity is fair and doesn’t compromise national security or public interest – and this is inevitably making acquisitions of digital assets more complex. While our report found that between 2009 and 2017, less than one per cent of deals were withdrawn without completion as a result of regulatory intervention, the number of deals that are being investigated – and which take longer to complete as a result – is much bigger. The cause for concern is that regulatory bodies are looking to increase their scrutiny and therefore the number of deals expected to be investigated looks set to increase dramatically, which will impact companies from across multiple sectors that are buying or selling data assets. There are also questions around how the traditional review standards and tools can, and should, be applied to data deals. So why the increased scrutiny by authorities?


How a new wave of accessibility tech is bringing benefits to all


Data released by Ofcom earlier this year revealed that people with disabilities are being left behind or are simply not using modern tech to the same degree as the rest of the population. The report states that only 53% of people with disabilities have a smartphone in their household, compared with 81% of non-disabled people. Ofcom also noted that 67% of people with disabilities use the internet, compared with 92% of non-disabled people. The consequences of improving this situation are profound. According to disability charity Scope, if a million more people with disabilities could work, the UK economy alone would grow by 1.7%, or £45bn. This is a fact Microsoft is aware of and working to change. In April this year it reached the highest level of the government’s disability employment scheme and became a Disability Confident Leader. This is a status it shares with technology resell partner John Lewis, which achieved its leadership credentials for disability employment in February this year.



Quote for the day:


"Good leaders must first become good servants." -- Robert Greenleaf


Daily Tech Digest - June 07, 2019

Autonomous versus automated: What each means and why it matters

Automated systems work best in well-defined environments with clear functions to perform. These systems can be built efficiently, and operate much faster than a human. One area, specific to security, that comes to mind is in validating an infrastructure template. As infrastructure increasingly becomes software defined, a CI/CD-like process is needed to validate the configurations. This can be viewed as a pre-deployment compliance check to make sure the infrastructure is provisioned correctly and that human errors are caught. Autonomous systems are most effective in an ever-evolving landscape such as new attack vectors and increased attack surfaces. These systems need access to datasets from which to learn and new algorithms to analyze the data differently as the AI space matures. These systems come at a cost, however, as many are heavily focused on R&D with increasing investments made over time. Due to the increased cost and complexity, these systems are overkill for problems that are just as easily addressed by automation-based systems. Over time, autonomous systems will require less training data, and the complexity is already being reduced by a combination of open source projects and cloud provider offerings, but they will continue to be more complex and expensive relative to automated systems.
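
A minimal sketch of the kind of pre-deployment compliance check described here, assuming a made-up template format and rule set (it is not tied to any particular infrastructure-as-code tool):

```python
# Illustrative template and rules only; a real check would parse the IaC format in use.
template = {
    "storage_bucket": {"public_read": False, "encryption": "aes256"},
    "vm_fleet": {"ssh_open_to_world": True, "os_patch_level": "2019-05"},
}

RULES = [
    ("public_read",       lambda v: v is False,    "buckets must not be publicly readable"),
    ("encryption",        lambda v: v == "aes256", "encryption at rest is required"),
    ("ssh_open_to_world", lambda v: v is False,    "SSH must not be open to 0.0.0.0/0"),
]

def validate(template):
    """Return a list of violations; an empty list means the template may deploy."""
    violations = []
    for resource, settings in template.items():
        for key, check, message in RULES:
            if key in settings and not check(settings[key]):
                violations.append(f"{resource}: {message}")
    return violations

if __name__ == "__main__":
    problems = validate(template)
    if problems:
        print("Blocking deployment:", *problems, sep="\n  ")
    else:
        print("Template passes compliance checks")
```

In practice a check like this would run as a gate in the CI/CD pipeline, failing the build before the template is ever deployed.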



Making the most of micro-moments with Dr. Shamsi Iqbal

The word “distraction” has a negative connotation to it and I want to look at it differently because sometimes you do need to step away from work and you do need to take breaks and you do need to just refresh your perspectives and I believe that that actually makes you more productive in the long run. So, I think that the problem is deeper here. So, we need to take breaks. We need to do other stuff, but we have difficulties in prioritizing what is important for us, what we need to get done, what moves us forward in the responsibilities that we have. And we often get lost. And I think that’s where technology can help us. I mean, if I’m not able to help myself because I am just distractible and when I go down that rat hole of distractions, then maybe yes, I do need something that pulls me back out. And so, that’s how we’re coming at this problem because I personally don’t feel that if you take a break and you go and chat with a colleague about mundane things, or if I go on Facebook or Twitter, unless I’m spending hours on it, I don’t see that to be a problem.


Legacy IT systems a significant security challenge


As legacy IT systems age, said Ford, the security risks increase, compounded by the fact that many of these systems are critical to the business and often cannot be decommissioned or replaced because of high costs, complexity or lack of suitable alternatives. “Legacy IT systems are often at the heart of cyber breach incidents, and because decommissioning is not usually an option, information security professionals need to manage the risk by working closely with key business stakeholders to identify all critical systems and the systems that support them,” he said.  The next step, said Ford, is to understand which are the most critical systems. “The role of security professionals is to assess the likelihood and potential impact of a cyber attack, while the role of business [professionals] is to identify what systems and processes are the most critical,” he said. Once security professionals understand what systems are critical, Ford said they would be able to prioritise and plan which ones to update and patch to make them secure. “This should be the objective of all information security professionals as business risk managers.”
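
The prioritisation step Ford describes is often reduced to a simple risk score combining likelihood and business impact. The sketch below is purely illustrative; the system names and 1-5 scores are invented:

```python
# Rank legacy systems by risk = likelihood x business impact, using scores that
# security professionals and business stakeholders might assign together (1-5).
systems = {
    "payroll_mainframe":  {"likelihood": 4, "impact": 5},
    "legacy_file_server": {"likelihood": 5, "impact": 2},
    "hr_intranet_portal": {"likelihood": 3, "impact": 3},
}

ranked = sorted(systems.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

# Patch or update in this order.
for name, s in ranked:
    print(f"{name}: risk score {s['likelihood'] * s['impact']}")
```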


Instagram's ecommerce move reveals retailers need blockchain to keep up

Believe it or not, many of the very retailers who promote themselves on Instagram as the latest viral craze still use pen and paper for their internal logistics systems. The reason is simple: instead of modernizing to keep up with consumer trends and technological advancements, suppliers tend to stick with what they know. This results in disastrous outcomes for consumers whose purchases get lost in shipping frenzies, particularly around the holidays. For example, in 2014, the U.S. Postal Service reported that about 88 million undeliverable items were directed to the USPS Mail Recovery Center in Atlanta, Georgia. Of those tens of millions of items, only about 3% ended up in the correct customer's hands – the rest either got destroyed, donated or auctioned off. The most frustrating part of this current cycle of mismanagement is that real solutions already exist to help companies improve delivery success rates. By incorporating blockchain technology into the shipment process, retailers can create a fully integrated and streamlined system across their entire supply chain.


Cloud Hadoop Competition Hits MapR, Cloudera

"MapR has formidable competition on premises from a much larger Cloudera now, and faces increased pressure from cloud providers offering their own Hadoop-based solutions. Their proprietary versions of open source components now appear more risky as a result, and lead to more questions about their suitability for long term plans," Adrian told InformationWeek this week. "Gartner has talked to a number of concerned [MapR] customers, some quite large, who believe in the technology, and some made additional investments during the past year, but the outlook is not encouraging." Among the company's missteps was the transition from direct sales to an indirect model, which is tricky when you are dealing with complicated technology sales to enterprise-sized companies, according to Adrian. In spite of its own difficulties, Cloudera may be positioned to take advantage of MapR's troubles. Cloudera's Reilly said that the merger with Hortonworks has enabled the company to get more resources and scale to develop cloud architecture to "quickly re-platform our business. MapR could not get the resources or scale. Their customer base is an opportunity for us and part of our growing pipeline."


Juniper: Security could help drive interest in SDN


Juniper’s study found that 87 percent of businesses are still doing most or some of their network management at the device level. What all of this shows is that customers are obviously interested in SDN but are still grappling with the best ways to get there, Bushong said. The Juniper study also found users interested in SDN because of the potential for a security boost. SDN can enable a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security segment that does not touch any sensitive information. Another segment could have much more fine-grained remote-access control with software-based firewall and encryption policies on it, which allow sensitive data to traverse it. SDN users can roll out security policies across the network from the data center to the edge much more rapidly than in traditional network environments.


"This method was inherently biased," he said, and "failed to captured niche interests like mushroom picking." That led to the creation of Amazon's first recommendation engine. Wilke outlined the technical details of the matrix-based completion methods that Amazon tested, which eventually led to its first commercial deep learning model. Throughout, "We didn't sequester our scientists," he said. Instead, data scientists were integrated into teams focused on the product and customer experience. "They start with the customer experience, not the machine learning algorithm," he said. Similarly, for the development of its in-store shopping experience, Amazon Go VP Dilip Kumar stressed, "If you start with a genuine customer problem, you can use the power of machine learning... to build a stellar customer experience." To create the concept of the Amazon Go store -- "take what you want and just go," according to Kumar -- Amazon had to choose technologies to eliminate the checkout process. It settled on computer vision. The first problem to solve, he said, was identifying the customer account and their precise location in the store. Amazon utilized geometry and deep learning to not just predict customer account location but accurately associate interactions to the right customer account.


Nearly two-thirds of businesses hit by credential abuse


“Both internal employees and third-party vendors need privileged access to be able to do their jobs effectively, but need this access granted in a way that doesn’t compromise security or impede productivity,” said Morey Haber, CTO and CISO of BeyondTrust. “In the face of growing threats, there has never been a greater need to implement organisation-wide strategies and systems to manage and control privileged access in a way that fits the needs of the user.” Globally, the businesses surveyed reported an average of 182 third-party suppliers logging in to their systems every week. In UK organisations, 46% said they have more than 100 suppliers logging in regularly, underlining the scope of risk exposure. The UK data shows that businesses still tend to be too trusting, with 83% admitting they trust third-party suppliers accessing their networks, slightly up from last year’s report. However, trust in employee privileged access was cited at 87%, down from 91% a year ago.


Cloud adoption drives the evolution of application delivery controllers

This raises the question of what features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically what features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation), and visibility. The desire to automate was a positive sign for the evolution of the buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT can’t operate effectively without automation, and technology professionals are starting to understand that. The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors.


Why Should We Care About Technology Ethics? The Updated ACM Code of Ethics

The original purpose of business is to serve society. If you don't serve society it’s less likely that someone will buy your product. And these days there's been a huge push from society towards requiring more ethical business practices. We've also seen pushback from employees within several well-known large companies when it comes to ethical issues, so there’s internal as well as external push for more ethical technologies. We're seeing these sorts of demands for more environmental considerations, more sustainability considerations, and more concern for the societal impact of technologies, too. People are worried about their data, they're worried about their privacy, they're worried about their kids, they're worried about all kinds of ethical issues that impact them. The fact that a lot of these companies have been able to operate in a relatively grey area for so long has meant that we've actually seen where these cases can go. There's now demand for governments to regulate more heavily, as can be seen with the GDPR.



Quote for the day:


"Leadership, on the other hand, is about creating change you believe in." -- Seth Godin


Daily Tech Digest - June 06, 2019

Cisco will use AI/ML to boost intent-based networking

“By applying machine learning and related machine reasoning, assurance can also sift through the massive amount of data related to such a global event to correctly identify if there are any problems arising. We can then get solutions to these issues – and even automatically apply solutions – more quickly and more reliably than before,” Apostolopoulos said. In this case, assurance could identify that the use of WAN bandwidth to certain sites is increasing at a rate that will saturate the network paths and could proactively reroute some of the WAN flows through alternative paths to prevent congestion from occurring, Apostolopoulos wrote.  “In prior systems, this problem would typically only be recognized after the bandwidth bottleneck occurred and users experienced a drop in call quality or even lost their connection to the meeting. It would be challenging or impossible to identify the issue in real time, much less to fix it before it distracted from the experience of the meeting. Accurate and fast identification through ML and MR coupled with intelligent automation through the feedback loop is key to successful outcome.”
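
Stripped to its essentials, the proactive behaviour described here amounts to extrapolating a utilisation trend and acting before a threshold is crossed. The sample values, link capacity and 10-minute threshold below are invented for illustration:

```python
# Illustrative sketch: extrapolate WAN-link utilisation and reroute before it saturates.
recent_utilisation_mbps = [420, 480, 560, 640, 730]   # last five samples, one minute apart
capacity_mbps = 1000

# Simple linear trend: average growth per sample.
growth = (recent_utilisation_mbps[-1] - recent_utilisation_mbps[0]) / (len(recent_utilisation_mbps) - 1)
minutes_to_saturation = ((capacity_mbps - recent_utilisation_mbps[-1]) / growth
                         if growth > 0 else float("inf"))

if minutes_to_saturation < 10:
    print(f"Predicted saturation in ~{minutes_to_saturation:.1f} min: "
          "rerouting lower-priority flows to an alternative path")
else:
    print("No action needed")
```

A real assurance system would of course learn these thresholds and trends from far richer telemetry, but the feedback loop (predict, then act before users notice) is the same.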



DevOps security best practices span code creation to compliance


As software development velocity increases with the adoption of continuous approaches, such as Agile and DevOps, traditional security measures struggle to keep pace. DevOps enables quicker software creation and deployment, but flaws and vulnerabilities proliferate much faster. As a result, organizations must systematically change their approaches to integrate security throughout the DevOps pipeline. ... Software security often starts with the codebase. Developers grapple with countless oversights and vulnerabilities, including buffer overflows; authorization bypasses, such as not requiring passwords for critical functions; overlooked hardware vulnerabilities, such as Spectre and Meltdown; and ignored network vulnerabilities, such as OS command or SQL injection. The emergence of APIs for software integration and extensibility opens the door to security vulnerabilities, such as lax authentication and data loss from unencrypted data sniffing. Developers' responsibilities increasingly include security awareness: They must use security best practices to write hardened code from the start and spot potential security weaknesses in others' code.
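
Of the code-level flaws listed above, SQL injection is the easiest to show concretely. This generic sketch, using Python's standard sqlite3 module (not code from any project mentioned in the article), contrasts the vulnerable string-building pattern with a parameterised query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())              # returns rows it should not

# Hardened: a parameterised query treats the input purely as data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall()) # returns nothing
```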


Reinforcement learning explained

The environment may have many state variables. The agent performs actions according to a policy, which may change the state of the environment. The environment or the training algorithm can send the agent rewards or penalties to implement the reinforcement. These may modify the policy, which constitutes learning. For background, this is the scenario explored in the early 1950s by Richard Bellman, who developed dynamic programming to solve optimal control and Markov decision process problems. Dynamic programming is at the heart of many important algorithms for a variety of applications, and the Bellman equation is very much part of reinforcement learning. A reward signifies what is good immediately. A value, on the other hand, specifies what is good in the long run. In general, the value of a state is the expected sum of future rewards. Action choices—policies—need to be computed on the basis of long-term values, not immediate rewards. Effective policies for reinforcement learning need to balance greed or exploitation—going for the action that the current policy thinks will have the highest value—with exploration, trying actions whose long-term value is still uncertain.
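
As a concrete illustration of value as "the expected sum of future rewards", here is a minimal value-iteration sketch that applies the Bellman backup to a made-up three-state problem; the states, transitions, rewards and discount factor are arbitrary:

```python
# Minimal value iteration on a toy 3-state MDP, illustrating the Bellman backup
#   V(s) = max_a [ sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s')) ].
gamma = 0.9
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "move": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "move": [(0.8, 2, 5.0), (0.2, 0, 0.0)]},
    2: {"stay": [(1.0, 2, 0.0)]},
}

V = {s: 0.0 for s in transitions}
for _ in range(100):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

print({s: round(v, 2) for s, v in V.items()})  # long-run value of each state
```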


The Linux desktop's last, best shot


Closer to home in the West, companies are turning to Linux for their engineering and developer desktops. Mark Shuttleworth, founder of Ubuntu Linux and its corporate parent Canonical, recently told me: "We have seen companies signing up for Linux desktop support because they want to have fleets of Ubuntu desktop for their artificial intelligence engineers." Even Microsoft has figured out that advanced development work requires Linux. That's why Windows Subsystem for Linux (WSL) has become a default part of Windows 10.  So, the opportunity is there for Linux to grab some significant market share. My question is: "Is anyone ready to take advantage of this opportunity?" All the major Linux companies -- Canonical, Red Hat and SUSE -- support Linux desktops, though it's not a big part of their businesses. The groups which do focus on the desktop, such as Mint, MX Linux, Manjaro Linux, and elementary OS, are small and under-financed. So I can't see them delivering the support most users -- nevermind governments and companies -- need. 


DNS – a security opportunity not to be overlooked, says Nominet


“We are seeing a lot more breaches, and with many businesses embracing digital transformation, the attack surface is getting wider. But in many cases, having an understanding of what is going on in the DNS layer can reduce the impact of breaches and even prevent them,” said Reed. “DNS has an important role to play because it underpins the network activity of all organisations. And because around 90% of malware uses DNS to cause harm, DNS potentially provides visibility of malware before it does so.” In addition to providing organisations with an opportunity to intercept malware before it contacts its command and control infrastructure, DNS visibility enables organisations to see other indicators of compromise such as spikes in IP traffic and DNS hijacking. “Being able to track and monitor DNS activity is important as it enables organisations to identify phishing campaigns and the associated leakage of data. It also enables them to reduce the time attackers are in the network and spot new domains being spun up for malicious activity and data exfiltration,” said Reed.
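
One simple form of the DNS visibility Reed describes is flagging queries for domains the organisation has never resolved before. The log entries and the "known domain" list below are invented for illustration:

```python
# Illustrative: flag DNS queries for domains this network has never resolved before,
# one basic indicator of compromise. All data here is made up.
known_domains = {"example.com", "office365.com", "ubuntu.com"}

dns_log = [
    {"client": "10.0.0.12", "query": "office365.com"},
    {"client": "10.0.0.31", "query": "xk3-payload-update.top"},   # suspicious newcomer
    {"client": "10.0.0.12", "query": "example.com"},
]

for entry in dns_log:
    domain = entry["query"]
    if domain not in known_domains:
        print(f"ALERT: first-seen domain {domain!r} queried by {entry['client']}")
        known_domains.add(domain)
```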


The Sustainability Revolution Hits Retail

Technology is paramount to building a truly sustainable business. Retailers are already applying advanced data analytics to supply chains to make the most of resources and reduce waste, which has a knock-on effect in terms of sustainability. The Industrial Internet of Things (IIoT) will continue to improve operational efficiency across different organisations, cutting down on energy and expenditure. Despite debate over the sustainability of blockchain, distributed ledger technology could bring about the transparency that could kill environmentally or socially questionable products. Blockchain could provide visibility across the entire supply chain, so buyers know exactly where a product came from and how it was made. Richline Group, for example, is already using blockchain to ensure that its diamonds are ethically sourced. Materials science also has an important role in finding new materials that are cheaper and lower maintenance than existing alternatives. 3D printing is key to working with new materials, creating rapid prototypes for testing. The adoption of innovative manufacturing techniques like 3D printing and advanced robotics is hoped to make supply chains more efficient.


Blazor on the Server: The Good and the Unfortunate


If you're wondering what the difference is between Blazor and BotS ... well, from "the code on the ground" point of view, not much. It's pretty much impossible, just by looking at the code in a page, to tell whether you're working with Blazor-on-the-Client or Blazor-on-the-Server. The primary difference between the two -- where your C# code executes -- is hidden from you. With BotS, SignalR automatically connects activities in the browser with your C# code executing on the server. That SignalR support obviously makes Blazor solutions less scalable than other Web technologies because of SignalR's need to maintain WebSocket connections between the client and the server. However, that scalability issue may not be as much of a limitation as you might think. What BotS does do, however, is "normalize" a lot of the ad hoc ways that have been needed when working with Blazor in previous releases. BotS components are, for example, just another part of an ASP.NET Core project and play well beside other ASP.NET Core technologies like Razor Pages, View Components and good old Controllers+Views.


Self-learning sensor chips won’t need networks

Key to Fraunhofer IMS’s Artificial Intelligence for Embedded Systems (AIfES) is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.” It’s “decentralized AI,” says Fraunhofer IMS. “It’s not focused towards big-data processing.” Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved. “It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says. Other benefits in decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.


New RCE vulnerability impacts nearly half of the internet's email servers

In a security alert shared with ZDNet earlier today, Qualys, a cyber-security firm specialized in cloud security and compliance, said it found a very dangerous vulnerability in Exim installations running versions 4.87 to 4.91. The vulnerability is described as a remote command execution -- different, but just as dangerous as a remote code execution flaw -- that lets a local or remote attacker run commands on the Exim server as root. Qualys said the vulnerability can be exploited instantly by a local attacker that has a presence on an email server, even with a low-privileged account. But the real danger comes from remote hackers exploiting the vulnerability, who can scan the internet for vulnerable servers, and take over systems. "To remotely exploit this vulnerability in the default configuration, an attacker must keep a connection to the vulnerable server open for 7 days (by transmitting one byte every few minutes)," researchers said. "However, because of the extreme complexity of Exim's code, we cannot guarantee that this exploitation method is unique; faster methods may exist."


What is CI/CD? Continuous integration and continuous delivery explained

Continuous integration is a development philosophy backed by process mechanics and some automation. When practicing CI, developers commit their code into the version control repository frequently, and most teams have a minimal standard of committing code at least daily. The rationale behind this is that it’s easier to identify defects and other software quality issues on smaller code differentials rather than larger ones developed over extensive periods of time. In addition, when developers work on shorter commit cycles, it is less likely for multiple developers to be editing the same code and requiring a merge when committing. Teams implementing continuous integration often start with version control configuration and practice definitions. Even though checking in code is done frequently, features and fixes are implemented on both short and longer time frames. Development teams practicing continuous integration use different techniques to control what features and code are ready for production.



Quote for the day:


"When building a team, I always search first for people who love to win. If I can't find any of those, I look for people who hate to lose." - H. Ross Perot


Daily Tech Digest - June 05, 2019

The Internet of Things enables a floating city of pleasure... and a vision of hell

Every passenger and all the ship's staff carry a wireless Bluetooth and NFC-enabled medallion about the size of a fat 25-cent coin. Through a massive network of sensors and edge computing devices, the medallion controls the opening of cabin doors, ordering drinks, delivery of services, and in emergencies it ensures no one is missed. Facial recognition is used to identify passengers as they come on board. And their location is known at all times to the ship's captain through a large dashboard that also shows the exact location of each of the ship's workers. This location information is used in many ways -- like by cleaning staff to service a cabin when they notice it is empty. Previously, they had to rely on knocking or other signs of vacancy. It's also used to deliver drinks and food directly to the passenger. And the medallion automatically unlocks the cabin door before the passenger reaches it. Drinks and food are automatically charged to the passenger's account, and alcohol consumption is not monitored or flagged if excessive. The medallion is also used for funds in the ship's casino.



4 reasons why Agile works and the most common excuse when it doesn’t

This is clearly linked to self-determination, because when teams are setting their own deadlines there is automatically an increased level of confidence in the outcome, and confidence is a critical component of success. In my experience teams are not afraid of hard work, they are afraid of failure. And when you look at the stats around failure, one study showed that on projects that failed, 75% of the time the teams involved knew it was going to fail on day 1. Now this lack of confidence can become a self-fulfilling prophecy, but by the same thinking, a belief that the project will be successful can also become self-fulfilling. When teams believe a project will fail, when it starts to fail they go into “I told you so” mode. However, when they believe a project will succeed and it starts to fail, they go into solution mode, looking to find out what caused the issues and trying to resolve them. Teams in solution mode will always outperform teams in “I told you so” mode.


Cloud computing and regulation: Following the eye of the storm

Out of the rapid growth of cloud computing technologies, we are starting to see a shift in how the law and regulation keep up. A major question mark looming over the sector is its lack of standardized guidance. Cloud computing is not governed by a specific “cloud law,” and no direct regulation applies to its services. Instead, the legal and regulatory landscape is made up of a matrix of different rules, as wide as the scope of the technology itself, spanning multiple industries and geographies. Given this breadth, there has been a gradual shift from legislative solutions to industry standardization as a means of closing the gap between regulation and the eye of the technological innovation storm. Whilst there is no direct legislation, some UK regulators, most notably in the financial services sector, have in recent years published guidance on the use of cloud technologies. This guidance focuses on how the technology can be used in compliance with existing regulatory rules, and whilst it has not set out a step-by-step process for deploying cloud technologies in compliance with regulatory requirements, it has shown that the regulators consider that there is no fundamental reason why firms cannot use cloud services in a regulatory compliant manner.


What is the cloud: beyond infrastructure as a service

Cloud adoption has grown rapidly, and today we find that almost all companies are using some form of cloud. However, research estimates that only approximately 20% of an enterprise's applications are in the cloud today. We are now entering chapter two, where we will focus on getting the next 80% of workloads — the mission-critical ones — to the cloud to optimize everything from supply chains to sales transactions. As we enter this next chapter, the definition of cloud is expanding and companies are now viewing it as an opportunity to incorporate existing IT and private cloud environments with new public cloud capabilities like AI and analytics completely underpinned by security. Moreover, they need to be able to easily choose where to deploy their workloads across all of these environments, which requires a commitment to open source technology and increased automation and management. This is a hybrid cloud approach, and this strategy is helping companies find new ways to solve age-old challenges, launch brand new business services, completely transform user and employee experiences, and much more.


Providing Drivers a Safety Net with Computer Vision


Synthetic Data is fast becoming an essential component of autonomous driving and computer vision AI systems. By bringing techniques from the movie and gaming industries (simulation, CGI) together with emerging generative neural networks (GANs, VAEs), we are now able to engineer perfectly-labeled, realistic datasets and simulated environments at scale. There is virtually no incremental cost of additional generated images, and since the Synthetic Data is created, all the attributes are known to pixel-perfect precision. Key labels such as depth, 3D position and partially obstructed objects are all provided by design. Application of this technology could allow important safety features to be brought to market quickly and cost-effectively, from crash prevention software to predictive maintenance, onboard diagnostics, and location insights. Synthetic Data is a cost-effective solution that cuts down on the time and effort needed to acquire, clean and organize driver data.
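
The "labels for free" property shows up even in a trivial renderer: because the scene is generated, the bounding box is known exactly, with no annotation step. The sketch below illustrates only that idea, not any production pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def render_synthetic_sample(size=64):
    """Generate one synthetic image plus its pixel-perfect bounding-box label."""
    img = np.zeros((size, size), dtype=np.uint8)
    w, h = rng.integers(8, 20, size=2)                  # object dimensions
    x, y = rng.integers(0, size - w), rng.integers(0, size - h)
    img[y:y + h, x:x + w] = 255                         # draw the "object"
    label = {"bbox": (int(x), int(y), int(w), int(h))}  # exact by construction
    return img, label

image, label = render_synthetic_sample()
print(label)   # no manual annotation was needed to obtain this label
```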


Data Uncertainty In The Time Of Brexit: How Business Can Protect Their Data

With so much noise surrounding Brexit and the constant changing circumstances and deadlines, it can be easy for businesses to bury their heads in the sand and wait for the dust to settle. However, it is crucial for businesses to take proactive measures to ensure their data processes stand the test of time. If they don’t act now, they will be left behind by quicker, more agile businesses. Data is the new currency for any business, and not being able to have an easy flow of data from the EU will seriously impact British business. Without the free flow of data to inform customer insights, market trends, and competitor analysis, the revenue streams of UK businesses will be seriously impacted as delays in data governance, management and usage will put these businesses at a serious competitive disadvantage. With political decisions continuing to fluctuate, organisations need to be prepared. The outcome of good preparation should be the agility that enables organisational resilience in the face of disruption to international data flows.


Phishing attacks that bypass 2-factor authentication are now easier to execute  

To overcome 2FA, attackers need to have their phishing websites function as proxies, forwarding requests on victims' behalf to the legitimate websites and delivering back responses in real time. The final goal is not to obtain only usernames and passwords, but active session tokens known as session cookies that the real websites associate with logged-in accounts. These session cookies can be placed inside a browser to access the accounts they're associated with directly without the need to authenticate. This proxy-based technique is not new and has been known for a long time, but setting up such an attack required technical knowledge and involved configuring multiple independent tools such as the NGINX web server to run as reverse-proxy. Then the attacker needed to manually abuse the stolen session cookies before they expire. Furthermore, some websites use technologies like Subresource Integrity (SRI) and Content Security Policy (CSP) to prevent proxying, and some even block automated browsers based on headers.


On the Frontier of an Evolving IT Workscape: What's Ahead for IT Work

People involved in the buying and selling of IT skills are skeptical that the talent pool emerging from four-year universities, business schools and community colleges will provide the skills that enterprises need to prosper. Only 16% of our respondents in large enterprises and 20% of those in midmarket enterprises believe they'll find the necessary skills from these graduates. And only a third of Habitat respondents in large enterprises and half of those in midmarket organizations believe that paying staff well will enable them to acquire the necessary IT expertise. Damien Bean, a former corporate IT vice president at Hilton Hotels Corp. and founder of CareerCurrency LLC, envisions service providers, not educators, playing an expanded role in getting IT work done. "My hypothesis is that the bottom half of the entire portfolio will move to a service model in the next 10 years," he says. "The hidden parts of this equation are demographics and outsourcing. A lot of the newest and most challenging projects are being built partly or solely offshore."


Network monitoring in the hybrid cloud/multi-cloud era

Most newer vendors will have a good API, he adds. Older ones might be slower to open up APIs to customers because they consider the data they produce with their analytics to be proprietary. “Infrastructure teams may have an advantage with some of the legacy tools that they currently have that are expanding into cloud-native environments,” Laliberte says. Tool sets like Riverbed, which integrates SNMP polling, flow and packet capture to get an enterprise network view of performance in hybrid cloud environments, and SolarWinds advanced network monitoring for on-premises, hybrid, and cloud, “give the opportunity to tie in both the legacy and cloud” monitoring, he adds. ... “Whether we call it hybrid, cloud or SD networking, the future of networking is software defined – with distributed rather than centralized intelligence or control,” Siegfried says. “The same automation philosophy, infrastructure and code techniques that have disrupted other areas of infrastructure management are applying to networking as well.”


Surviving and thriving in year three as a chief data officer

Data and analytics projects can be classified as either defense or offense (in the immortal words of Tom Davenport). Data defense seeks to resolve issues, improve efficiency or mitigate risks. Data quality, security, privacy, governance, compliance – these are all critically important endeavors, but they are often viewed as tactical, not strategic. The only time that data defense is discussed at the C level is when something goes wrong. Data offense expands top line revenue, builds the brand, grows the company and in general puts points on the board. Using data analytics to help marketing and sales is data offense. Companies may acknowledge the importance of defense, but they care passionately about offense and focus on it daily. The challenge for a CDO or CAO is that data defense is hard. A company’s shortcomings in governance, security, privacy, or compliance may be glaringly obvious. In some cases, new regulations like GDPR scream for attention. Data defense has a way of consuming more than its fair share of the attention and staff.



Quote for the day:


"Dont be afraid to stand for what you believe in, even if that means standing alone." -- Unknown


Daily Tech Digest - June 04, 2019

What the Future of Fintech Looks Like

Fintech has been driving huge changes across the financial services sector, but one area that is seeing exponential change is in the ultra-high net-worth individual (UHNWI) space. Crealogix Group, a global market leader in digital banking, has been working with banks across the world on their digital transformation journey for over 20 years, and it is only recently that they are seeing growing momentum in private wealth to digitize. Pascal Wengi, the Asia-Pacific managing director of Crealogix, says: “The old ways of servicing these clients through a personal touch is quickly moving to digitally-led platforms, with younger, tech-savvy UHNWIs wanting an immediate and comprehensive view of their assets without waiting for a phone call. At the same time, they also want customized solutions catered to their unique financial needs.” Platforms that allow access on both sides—clients, and their advisors, family office teams and accountants..., insists Wengi.



Data gravity is a metaphor introduced into the IT lexicon by a software engineer named Dave McCrory in a 2010 blog post.1 The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the Law of Gravity. In the current Enterprise Data Analytics context, as datasets grow larger and larger, they become harder and harder to move. So, the data stays put. It’s the gravity — and other things that are attracted to the data, like applications and processing power — that moves to where the data resides. Digital transformation within enterprises — including IT transformation, mobile devices and Internet of things — is creating enormous volumes of data that are all but unmanageable with conventional approaches to analytics. Typically, data analytics platforms and applications live in their own hardware + software stacks, and the data they use resides in direct-attached storage (DAS). Analytics platforms — such as Splunk, Hadoop and TensorFlow — like to own the data. So, data migration becomes a precursor to running analytics.
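
A back-of-the-envelope calculation shows why large datasets tend to stay put; the dataset size, link speed and utilisation figure below are arbitrary examples, not numbers from the article:

```python
# Back-of-the-envelope: cost of moving a large dataset vs. moving the analytics to it.
dataset_tb = 500                  # size of the analytics dataset, in terabytes
link_gbps = 10                    # dedicated network link
effective_utilisation = 0.7       # realistic sustained throughput

seconds = (dataset_tb * 8 * 1e12) / (link_gbps * 1e9 * effective_utilisation)
print(f"~{seconds / 86400:.1f} days just to copy the data")   # roughly 6.6 days
```

At that scale it is usually faster, and cheaper, to move the analytics stack to where the data resides than to move the data to the analytics stack, which is the behaviour the data gravity metaphor describes.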


5 requirements for success with DataOps strategies

Organizations that operate at this speed of change require modern data architectures that allow for the quick use of the ever-expanding volumes of data. These infrastructures – based on hybrid and multi-cloud for greater efficiency – provide enterprises with the agility they need to compete more effectively, improve customer satisfaction and increase operational efficiencies. When the DataOps methodology is part of these architectures, companies are empowered to support real-time data analytics and collaborative data management approaches while easing the many frustrations associated with access to analytics-ready data. DataOps is a verb, not a noun: it is something you do, not something you buy. It is a discipline that involves people, processes and enabling technology. However, as organizations shift to modern analytics and data management platforms in the cloud, you should also take a hard look at your legacy integration technology to make sure that it can support the key DataOps principles that will accelerate time to insight.



An API architect typically performs a high-level project management role within a software development team or organization. Their responsibilities can be extensive and diverse, and a good API architect must combine advanced technical skills with business knowledge and a focus on communication and collaboration. There are often simultaneous API projects, and the API architect must direct the entire portfolio. API architects are planners more than coders. They create and maintain technology roadmaps that align with business needs. For example, an API architect should establish a reference architecture for the organization's service offerings, outlining each one and describing how they work. The architect should define the API's features, as well as its expected security setup, scalability and monetization. The API architect sets best practices, standards and metrics for API use, as well. These guidelines should evolve as mistakes become clear and better options emerge.



Edge-based caching and blockchain-nodes speed up data transmission

Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache would connect to the same MySQL database in Virginia, say, their users will get the data three to 16 times faster than a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says. The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms over 223ms using a traditional cache. An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.


Microsoft's Vision For Decentralized Identity

Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in breaches, affecting our social, professional, and financial lives. Microsoft believes that there’s a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This whitepaper explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations. Today we use our digital identity at work, at home, and across every app, service, and device we engage with. It’s made up of everything we say, do, and experience in our lives—purchasing tickets for an event, checking into a hotel, or even ordering lunch. 


Your 3-minute guide to serverless success

What has propelled the use of serverless? Faster deployment, the simplification and automation of cloudops (also known as “no ops” and “some ops”), integration with emerging devops processes, and some cost advantages. That said, most people who want to use serverless don’t understand how to do it. Many think that you can take traditional on-premises applications and deem them serverless with the drag of a mouse. The reality is much more complex.  Indeed, serverless application development is more likely a fit for net new applications. Even then you need to consider a few things, mainly that you need to design for serverless. Just as you should design for containers and other execution architectures that are optimized by specific design patterns, serverless is no exception. ... The trick to building and deploying applications on serverless systems is understanding what serverless is and how to take full advantage. We have a tendency to apply all of our application architecture experience to all types of development technologies, and that will lead to inefficient use of the technology, which won’t produce the ROI expected—or worse, negative ROI, which is becoming common.


Author Q&A: Chief Joy Officer

Change is hard. We get used to the way we work and we assume it’s just the way it has to be. Inertia is a big deal. Many of us have tried to make changes in our personal life—our health, our financial situation—only to find out we’re stuck in a rut. We know we need to change our behaviors in order to change our outcomes, but changing human behavior is hard. What probably prevents change more than anything is success. If you’re successful enough, then it’s hard to be convinced of the value of change. You’ll say, well, why should we change when we’re already successful? Of course the problem with success is that it is often fleeting. It’s not like you reach a level of success and then automatically stay there. Every organization, every market, and every business ebbs and flows. When it’s flowing awesomely, we figure we don’t need to change. But when it’s ebbing, we get scared—and sometimes that’s the least opportune time to make a change, because fear can cloud our ability to make the best decisions for our organizations or our teams.


Discover practical serverless architecture use cases


A more complete serverless architecture-based system comes into play with the workloads related to video and picture analysis. In this example, serverless computing enables an as-needed workflow to spin up out of a continuous process, and the event-based trigger pulls in an AI service: Images are captured and analyzed on a standard IaaS environment, with events triggering the use of Amazon Rekognition or a similar service to carry out facial recognition when needed. The New York Times used such an approach to create its facial recognition system that used public cameras around New York's Bryant Park. Software teams can also use serverless designs to aid technical security enforcement. Event logs from any device on a user's platform can create triggers that send a command into a serverless environment. The setup kicks off code to identify the root cause for the logged event or a machine learning- or AI-based analysis of the situation on the device. This information, in turn, can trigger what steps to take to rectify issues and protect the overall systems.
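
A hedged sketch of the event-driven pattern described above: an S3 upload event triggers a function that calls Amazon Rekognition on the new image. The bucket and key come from the standard S3 event payload; error handling, result storage and any follow-on triggers are omitted:

```python
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Triggered by an S3 upload event; runs facial analysis on the new image.

    What happens with the result (store it, raise an alert, trigger another
    function) is left to the rest of the pipeline.
    """
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["DEFAULT"],
    )
    faces = response.get("FaceDetails", [])
    return {"image": f"s3://{bucket}/{key}", "faces_detected": len(faces)}
```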


It’s time for the IoT to 'optimize for trust'

The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with SSL. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed. Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. It found and analyzed 56 million IoT-device transactions over that time, and identified the type of devices, protocols they used, the servers they communicated with, how often communication went in and out and general IoT traffic patterns. The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).



Quote for the day:


"The ability to continuously simplify, while adding more value and removing clutter, is a superpower." -- @ValaAfshar