Daily Tech Digest - June 03, 2019

Cloud computing could look quite different in a few years


Everything may run on the cloud, but running multiple clouds at the same time can still pose challenges, such as compliance with data regulation. Slack — the fastest-growing Software-as-a-Service company on the planet — has already shown how integration can work, and its success is reflected in its trial-to-paid conversion rate, which stands at 30 percent. Slack integrates with other apps such as Trello, Giphy, and Simple Poll so users can access all of them from a single platform. This is something we’ll see increasingly in cloud computing as players large and small look to help businesses and individuals become more efficient and productive. As more and more of life happens in the cloud, the term “cloud” could disappear altogether (and companies like mine, with “cloud” in their name, may need to rethink their branding). What we now call “cloud computing” will simply be “computing.” And maybe, by extension, “as-a-Service” will disappear, too, as SaaS replaces traditional software. In tech, you can never be certain of the direction of travel. Things change quickly and in unexpected ways, and some of the changes we’ve seen over even the past 10 years would have been inconceivable just a few years before.



Diversifying the high-tech talent pool

Entrepreneurs always find a way. I’ve never considered being a woman or a Latina to be an obstacle. In fact, I usually consider it to be quite an asset, in part due to the incredible entrepreneurial culture of the Hispanic community in general and my family in particular. There are so many challenges to starting your own business at 25 years old, including insufficient access to affordable capital, top talent, and customers. These obstacles can be overcome only through consistent growth; that in turn can be accomplished only by consistently reinvesting in Pinnacle. In many ways, everything we have achieved has only been made possible by the simple philosophy of investing back into the business, which is a message I share with other entrepreneurs every chance I get. ... The successful firms — Pinnacle included — have embraced these technologies and adapted their business models and service offerings accordingly. Others have chosen to sell, resulting in our industry consolidating somewhat over the years. No matter what, the one thing we will always be able to count on is change, so we’re making the investments today to be ready for tomorrow.


What are edge computing challenges for the network?


In the ongoing back and forth between centralized and decentralized IT, we are beginning to see the limitations of a centralized IT that relies on hundreds or thousands of industry-standard servers running a host of applications in consolidated data centers. New types of workloads, distributed computing and the advent of IoT have fueled the rise of edge computing. ... When compute resources and applications are centralized in a data center, enterprises can standardize both technical security and physical security. It's possible to build a wall around the resources for easier security. But edge computing forces businesses to grapple with enforcing the same network security models and physical security parameters for more remote servers. The challenge is that the security footprint and traffic patterns are all over the place. ... The need for edge computing typically emerges because disparate locations are collecting large amounts of data. Enterprises need an overall data protection strategy that can encompass all this data.


Empowering robotic process automation with a bot development framework

The bot development framework is a methodology that standardizes bot development throughout the organization. It is a template or skeleton providing generic functionality that can be selectively extended with additional user-written code. It adheres to the design and development guidelines defined by the Center of Excellence (CoE), performs testing, and provides application access. This speeds up the development process and makes it simple and convenient enough for business units to create bots with little or no help from the RPA team. It helps save time in development, testing, building, deployment, and execution. ... Define frequently changing variables in a central configuration file. Common data such as application URLs, orchestrator queue names, maximum retry numbers, timeout values, and asset names are prone to frequent updates. It is recommended to create a “configuration file” to store these data in a centralized location. This will increase process efficiency by saving the time needed to access multiple applications.
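
As a minimal sketch of that recommendation, with a hypothetical file name and keys: a bot reads every frequently changing value from one central configuration file instead of hard-coding it. Commercial RPA platforms provide their own equivalents of this pattern.

```python
import json

# Hypothetical centralized configuration file, e.g. config.json:
# {
#   "app_url": "https://erp.example.com/login",
#   "orchestrator_queue": "InvoiceQueue",
#   "max_retries": 3,
#   "timeout_seconds": 30,
#   "asset_name": "ERP_Credential"
# }

def load_config(path: str = "config.json") -> dict:
    """Load all frequently changing settings from one central file."""
    with open(path) as f:
        return json.load(f)

config = load_config()
# Every bot references the shared values instead of hard-coding them.
print(config["app_url"], config["max_retries"])
```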


Experts: Enterprise IoT enters the mass-adoption phase

That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge. “That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey. Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said. One of the central problems – IoT security – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities.



What does Arm's new N1 architecture mean for Windows servers?

The AWS A1 Arm instances are for scale-out workloads like microservices, web hosting and apps written in Ruby and Python. Like Cloudflare's workloads, those are tasks that benefit from the massive parallelisation and high memory bandwidth that Arm provides. Inside Azure, Windows Server on Arm is running not virtual machines — because emulating x86 trades off performance for low power — but highly parallel PaaS workloads like Bing search index generation, storage and big data processing. For the first time, an Arm-based supercomputer (built by HPE with Marvell ThunderX2 processors) is on the list of the top 500 systems in HPC — another highly parallel workload. And the next-generation Arm Neoverse N1 architecture is designed specifically for servers and infrastructure. Part of that is Arm delivering a whole server processor reference design, not just a CPU spec, making it easier to build N1 servers. The first products based on N1 should be available in late 2019 or early 2020, with a second generation following in late 2020 or early 2021.


The World Economic Forum wants to develop global rules for AI


The issue is of paramount importance given the current geopolitical winds. AI is widely viewed as critical to national competitiveness and geopolitical advantage. The effort to find common ground is also important considering the way technology is driving a wedge between countries, especially the United States and its big economic rival, China. “Many see AI through the lens of economic and geopolitical competition,” says Michael Sellitto, deputy director of the Stanford Institute for Human-Centered AI. “[They] tend to create barriers that preserve their perceived strategic advantages, in access to data or research, for example.” A number of nations have announced AI plans that promise to prioritize funding, development, and application of the technology. But efforts to build consensus on how AI should be governed have been limited. This April, the EU released guidelines for the ethical use of AI. The Organisation for Economic Co-operation and Development (OECD), a coalition of countries dedicated to promoting democracy and economic development, this month announced a set of AI principles built upon its own objectives.


Data Architect's Guide to Containers: How, When, and Why to Use Containers with Analytics

From the perspective of the analyst or data scientist, containers are valuable for a number of reasons. For one thing, container virtualization has the potential to substantively transform the means by which data is created, exchanged, and consumed in self-service discovery, data science, and other practices. The container model permits an analyst to share not only the results of her analysis, but the data, transformations, models, etc. she used to produce it. Should the analyst wish to share this work with her colleagues, she could, within certain limits, encapsulate what she’s done in a container. In addition to this, containers claim to confer several other distinct advantages—not least of which is a consonance with DataOps, DevOps and similar continuous software delivery practices—that I will explore in this series. To get a sense of what is different and valuable about containers, let’s look more closely at some of the other differences between containers, VMs, and related modes of virtualization. ... Unlike a VM image, the ideal container does not have an existence independent of its execution. It is, rather, quintessentially disposable in the sense that it is compiled at run time from two or more layers, each of which is instantiated in an image. Conceptually, these “layers” could be thought of as analogous to, e.g., Photoshop layers: by superimposing a myriad of layers, one on top of the other, an artist or designer can create a rich final image.


Business leaders failing to address cyber threats 


Despite this, the majority (71%) of the C-suite concede that they have gaps in their knowledge when it comes to some of the main cyber threats facing businesses today. This includes malware (78%), despite the fact that 70% of businesses admit they have found malware hidden on their networks for an unknown period of time. When a security breach does happen, in the majority of businesses surveyed, it is first reported to the security team (70%) or the executive/senior management team (61%). In less than half of cases (40%) is it reported to the board. This is unsurprising, the report said, in light of the fact that one-third of CEOs state that they would terminate the contract of those responsible for a data breach. The report also reveals that only half of CISOs say they feel valued by the rest of the executive team from a revenue and brand protection standpoint, while nearly a fifth (18%) of more than 400 CISOs questioned in a separate poll say they believe the board is indifferent to the security team or actually sees them as an inconvenience.


Executive's guide to prescriptive analytics


Any data that creates a picture of the present can be used to create a descriptive model. Common types of data are customer feedback, budget reports, sales numbers, and other information that allows an analyst to paint a picture of the present using data about the past. A thorough and complete descriptive model can then be used in predictive analysis to build a model of what's likely to happen in the future if the organization's current course is maintained without any change. Predictive models are built using machine learning and artificial intelligence, and take into account any potential variables used in a descriptive model. Like a descriptive analysis, a predictive model can be as broadly or as narrowly focused as a business needs it to be. Predictive models are useful, but they aren't designed to do anything outside of predicting current trends into the future. That's where prescriptive analytics comes in. A good prescriptive model will account for all potential data points that can alter the course of business, make changes to those variables, and build a model of what's likely to happen if those changes are made.
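
As a rough illustration of the descriptive-to-predictive-to-prescriptive progression, the sketch below fits a simple model to made-up historical data and then sweeps a controllable variable to recommend an action; real prescriptive systems are far more elaborate, but the mechanism is the same.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Descriptive: made-up historical data showing diminishing returns
# of weekly ad spend (a controllable variable) on weekly sales.
ad_spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # $k
sales = np.array([10.0, 17.0, 21.0, 23.0, 24.0])          # $k

# Predictive: model what is likely to happen as spend varies.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(ad_spend, sales)

# Prescriptive: sweep the variable we can change and recommend the
# value that maximizes predicted profit (sales minus spend).
candidates = np.linspace(0.0, 8.0, 81).reshape(-1, 1)
profit = model.predict(candidates) - candidates.ravel()
best = candidates[int(np.argmax(profit))][0]
print(f"Recommended weekly ad spend: ${best:.1f}k")
```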



Quote for the day:


"The question isn't who is going to let me; it's who is going to stop me." -- Ayn Rand


Daily Tech Digest - June 02, 2019

The future of system architecture

So far, the primary effect of any API-first mandate has been to make developers ensure they document their APIs and publicize them. But a major thrust of the Amazon API-first mandate was to reduce the costs incurred from developing duplicate capabilities in multiple systems. Because most enterprises do not update all their systems every few years, any API-first mandate will take time to show real effects in the enterprise. But over time, those effects will make themselves felt, especially when an API-first mandate is combined with a reuse-before-build mandate that requires system developers to reuse capabilities available in the enterprise before building new ones. As more systems make their capabilities available through APIs, and development teams are tasked to reuse before building, there will come a point at which building new systems is replaced by recomposing existing capabilities into new capabilities. The amount of duplication across systems with widely varying purposes is surprising. Most systems need a way to store and retrieve data. Most systems need a way to authenticate and authorize users. Most systems need the ability to display text and render graphics.
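
To make the reuse-before-build idea concrete, here is a hypothetical sketch: rather than building its own authentication, a new system calls an existing enterprise capability through its API. The endpoint, payload, and response shape are all invented for illustration.

```python
import requests

# Hypothetical internal endpoint exposing an existing enterprise capability.
AUTH_API = "https://auth.internal.example.com/v1/token/verify"

def is_authorized(token: str, scope: str) -> bool:
    """Reuse the enterprise authentication capability via its API
    instead of building authentication into this system."""
    resp = requests.post(AUTH_API,
                         json={"token": token, "scope": scope},
                         timeout=5)
    resp.raise_for_status()
    return resp.json().get("authorized", False)

# A new system composes existing capabilities rather than duplicating them.
if is_authorized("user-token-123", "orders:read"):
    print("fetch orders through the existing orders API")
```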



Is this the future of retail? 7-Eleven launches checkout-free store

Australia’s largest convenience retailer is making a move on checkout-free, launching a “cashless and cardless” concept store in Richmond, Melbourne today. The store will allow customers to pair their cards with a smartphone app, scan items with their cameras, and then walk out. It’s a similar system to the one trialled by Woolworths in Sydney last year and follows the success of Amazon’s no-checkout grocery stores in the US. 7-Eleven chief executive Angus McKay said he’s on a mission to push the envelope on convenience retailing. “We’re trying to push the notion of ‘convenience’ to its absolute limit,” McKay said in a statement circulated on Wednesday morning. “In the new concept store, customers will notice the absence of a counter. The store feels more spacious and customers avoid being funnelled to a checkout location creating a frictionless in-store experience,” he said. The announcement follows a trial run out of an Exhibition Street store in Melbourne, although 7-Eleven hasn’t detailed plans for any further expansion of the concept as yet.


How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh


As more data becomes ubiquitously available, the ability to consume it all and harmonize it in one place under the control of one platform diminishes. Imagine that, just in the domain of 'customer information', there is an increasing number of sources inside and outside the boundaries of the organization that provide information about existing and potential customers. The assumption that we need to ingest and store the data in one place to get value from a diverse set of sources is going to constrain our ability to respond to the proliferation of data sources. I recognize the need for data users such as data scientists and analysts to process a diverse set of datasets with low overhead, as well as the need to separate operational systems' data usage from the data that is consumed for analytical purposes. But I propose that the existing centralized solution is not the optimal answer for large enterprises with rich domains and continuously added new sources. Organizations' need for rapid experimentation introduces a larger number of use cases for consumption of the data from the platform.


The Intersection of Innovation, Enterprise Architecture and Project Delivery

Peter Drucker famously declared “innovate or die.” But where do you start? Many companies start with campaigns and ideation. They run challenges and solicit ideas both from inside and outside their walls. Ideas are then prioritized and evaluated. Sometimes prototypes are built and tested, but what happens next? Organizations often turn to the blueprints or roadmaps generated by their enterprise architectures, IT architectures and/or business process architectures for answers. They evaluate how a new idea and its supporting technology, such as service-oriented architecture (SOA) or enterprise resource planning (ERP), fits into the broader architecture. They manage their technology portfolio by looking at their IT infrastructure needs. A lot of organizations form program management boards to evaluate ideas, initiatives and their costs. In reality, these evaluations are based on lightweight business cases without broader context. They don’t have a comprehensive understanding of what systems, processes and resources they have, what they are being used for, how much they cost, and the effects of regulations.


When algorithms mess up, the nearest human gets the blame


“While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how small or unintentional their involvement. This pattern offers important insight into the troubling way we speak about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but just a few days later the narrative shifted to focus on the distraction of the driver. “We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with. Yet in the current regulatory vacuum, they will continue to pay the steepest cost.


The Death of Enterprise Architecture: defeating the DevOps, microservices, ...

Current application theory says that all responsibility for software should be pushed down to the actual DevOps-style team writing, delivering, and running the software. This leaves the Enterprise Architect role in the dust, effectively killing it off. In addition to being disquieting to Enterprise Architects out there who have steep mortgage payments and other expensive hobbies, it seems to drop the original benefits of enterprise architecture, namely oversight of all IT-related activities to make sure things both don't go wrong (e.g., with spending, poor tech choices, problematic integration, etc.) and that things, rather, go right. Michael has spoken with several Enterprise Architecture teams about the changing nature of how Enterprise Architecture can help in a DevOps- and cloud-native-driven culture. He will share their experiences, including what type of Enterprise Architecture is actually needed, tactics for transitioning, and when it's best to just kill off Enterprise Architecture and let the DevOps cowboys run wild.


Address goals with various enterprise architecture strategies


Enterprise architecture can also revolve around important application decisions, rather than a diagram of software stacks. In the context of software architecture, decisions include the programming language, platform, type of cloud services used, CI/CD systems involved in deployment, unit tests, the data-interchange format for the API, where the APIs are registered and related systems. For some programmers, the term architecture means a look at just the highest level of design: a set of domain objects that interrelate, such as customer, order and claim. Another view of enterprise architecture in the technical realm revolves around quality attributes. These attributes must exist for the software to work, but are unlikely to fit in a specification document. Examples include reliability, capacity, scalability and security -- even things such as uptime, measuring and monitoring levels, rollback approach, delivery cadence, time to build and time to deploy. Quality elements are not functional requirements, per se, but are ways to determine acceptable operating conditions and necessary tradeoffs to get there.


What You Need to Know about Programmable Logic Controller (PLC)


Nowadays, dedicated pieces of software have been developed for the PC in order to help with PLC programming. Once the program is written, it is downloaded from the computer to the PLC with a special cable. In the old days, up until the mid-1990s, PLCs were programmed using either special-purpose programming terminals or proprietary programming panels. Oftentimes, these had function keys that represented the logical elements of PLC programs, and programs were stored on cassette tape cartridges. The most widely used form of programming is ladder logic. It features symbols (as opposed to words) to emulate relay logic control, with the symbols interconnected by lines representing the flow of current. As the years went on, the number of available symbols increased, and with it the level of functionality that PLCs offer.
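
Ladder logic itself is graphical, but each rung reduces to boolean logic over inputs and outputs. A classic first example is the latching start/stop rung for a motor, sketched here as a plain Python simulation (the rung and I/O names are illustrative, not any particular PLC's syntax):

```python
# A latching start/stop rung, a classic first ladder-logic program:
#
#   |--[ START ]--+--[/STOP ]----( MOTOR )--|
#   |--[ MOTOR ]--+
#
# [ ] = normally-open contact, [/] = normally-closed contact, ( ) = coil.

def scan(start: bool, stop: bool, motor: bool) -> bool:
    """One PLC scan cycle: the MOTOR coil energizes if START is pressed
    or the motor is already running (the latch), and STOP is not pressed."""
    return (start or motor) and not stop

motor = False
for start, stop in [(True, False), (False, False), (False, True)]:
    motor = scan(start, stop, motor)
    print(f"start={start} stop={stop} -> motor={motor}")
# -> True (started), True (latched on), False (stopped)
```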


Scanning the fintech landscape

Tala and Branch both seek to offer microlending over mobile devices in developing countries. The US-based companies make real-time loan decisions dynamically by using every piece of information they can gather from the customer’s mobile phone; public reports note that the companies use text messages, contacts, and hundreds of other data points to make underwriting decisions. A new set of companies is developing demographically focused products. They segment not only from a brand and marketing perspective, but from a product innovation perspective as well. For example, True Link Financial’s elder fraud protections, Finhabits’ savings focus for Latinos, Camino Financial’s lending for Latino-owned small and medium-size businesses, and Ellevest’s product design for women all go beyond branding to design products from scratch with unique use cases and features in mind. Similarly, Brex offers cards tailored individually for startups, for ecommerce companies, and (reportedly) for other small business segments.


Five-Step Action Plan for DevOps at Scale

To give you a practical example of how these steps come together, consider the story of a large manufacturing enterprise with which we had the opportunity to work. They began their enterprise DevOps adoption with a pilot project in which they migrated their database to an AWS data lake. The project quickly showed how DevOps could create greater scalability to support the data demands of the manufacturer’s IoT applications. The manufacturer’s Center of Excellence leveraged this initial success to apply DevOps and digital transformation across the company’s various departments, applying the model above to departments like enterprise architecture and application development, and even to business units like credit services. With the initial pilot project focused on a well-defined migration to AWS, the outcome has been the company’s agile adoption of DevOps for greater security, cost efficiencies and reliability. The idea of enterprise DevOps at scale can be daunting -- especially for large enterprises with complex systems, complicated processes and a great deal of technical debt.



Quote for the day:


"Leadership does not depend on being right." -- Ivan Illich


Daily Tech Digest - June 01, 2019


This challenge will only be amplified as the amount of data available to retailers increases: the market for retail Internet of Things (IoT) sensors, RFID tags, beacons and wearables is projected to grow 23% annually through 2025, which will generate the data needed for targeted customer experiences and optimized operations. As retail consumers increasingly live and shop across multiple channels, a new strategy for analytics is needed to take advantage of all that additional data. Single data pipelines that slow the extraction of insights, and the decision-making based on those insights, are not the right fit for this new paradigm. A single data pipeline prevents analytics from delivering insights at the pace needed by line-of-business decision makers. In a single-version-of-truth (SVOT) world, employees often lose patience with the process and attempt do-it-yourself strategies with data. An environment where marketing, sales, demand planning, supply chain, operations and finance each apply their own tools, filters and data-modeling decisions will result in a multitude of interpretations, even if they start from the same pile of data.


Despite mounting evidence of the substantial benefits provided by analytics, most companies have barely scratched the surface of what is possible. The good news is that the tide is turning. The field is increasingly attracting new talent, who are introducing new skills such as data science and statistics to the realm of HR. This helps to further progress, as does the advance in technologies enabling real-time data collection and analysis of unstructured, as well as structured, data. Consequently, the growth of these skills is set to continue to rise exponentially. Building a people analytics function coupled with capitalizing on technologies that collect, store, and dynamically visualize data enables companies to put information at the fingertips of the business leaders to support decision-making. Moreover, this democratization of data can also help managers by providing data on their own behaviors, as well as providing them with insights that support employee engagement, development, and performance.



PCI Express 5.0 finalized, but 4.0-compatible hardware is only now shipping  

On its own merits, PCIe 5.0 is impressive, doubling the transfer rates of PCIe 4.0, which in turn doubled the transfer rates of PCIe 3.0. In practical terms, a PCIe 5.0 x1 slot delivers the same bandwidth (~4GB/s) as a full-size, first-generation PCIe x16 slot from 2003, commonly used in graphics cards. As for deployment, it is likely to be some time before PCIe 5.0 devices arrive, though it is possible that Intel may skip PCIe 4.0 entirely, as its Compute Express Link (CXL) technology for connecting FPGA-based accelerators is based on PCIe 5.0. This should be taken with a grain of salt—rumors indicated that Intel planned to skip a 10nm manufacturing process in favor of moving to 7nm, following low yields on 10nm parts. Intel's Computex announcements show 10nm plans for mobile systems, though desktop-class CPUs have yet to be announced. From an implementation standpoint, the technical complexity of moving from 4.0 to 5.0 is lower than that of moving from 3.0 to 4.0, making a quick upgrade of existing 4.0 designs likely.
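
The ~4GB/s equivalence follows from per-lane signaling rates and encoding overhead, which the quick calculation below reproduces (figures are the commonly cited spec values: 2.5 GT/s with 8b/10b encoding for PCIe 1.0, 32 GT/s with 128b/130b for PCIe 5.0):

```python
# Per-lane bandwidth = transfer rate x encoding efficiency.

def lane_gbytes_per_s(gt_per_s: float, payload_bits: int, total_bits: int) -> float:
    """Usable gigabytes per second for one lane in one direction."""
    return gt_per_s * payload_bits / total_bits / 8  # bits -> bytes

gen1_x16 = 16 * lane_gbytes_per_s(2.5, 8, 10)    # 8b/10b encoding
gen5_x1 = 1 * lane_gbytes_per_s(32.0, 128, 130)  # 128b/130b encoding

print(f"PCIe 1.0 x16: {gen1_x16:.2f} GB/s")  # 4.00 GB/s
print(f"PCIe 5.0 x1:  {gen5_x1:.2f} GB/s")   # ~3.94 GB/s
```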


Sustainable Operations in Complex Systems With Production Excellence


Production excellence is a set of skills and practices that allow teams to be confident in their ownership of production. Production-excellence skills are often found among SRE teams or individuals with the SRE title, but they ought not be solely their domain. Closing the feedback loop on production ownership requires us to spread these skills across everyone on our teams. Under production ownership, operations become everyone's responsibility rather than “someone else's problem”. Every team member needs to have a basic fluency in operations and production excellence even if it's not their full-time focus. And teams need support when cultivating those skills and need to feel rewarded for them. There are four key elements to making a team and the service it supports perform predictably in the long term. First, teams must agree on what events improve user satisfaction and eliminate extraneous alerts for what does not. Second, they must improve their ability to explore production health, starting with symptoms of user pain rather than potential-cause-based exploration.


A Quantum Revolution Is Coming

Now, individuals and entities across NGIOA are part of an entangled global system. Since the ability to generate and manipulate pairs of entangled particles is at the foundation of many quantum technologies, it is important to understand and evaluate how the principles of quantum physics translate to the survival and security of humanity. If an individual human is seen as a single atom, is our behavior guided by deterministic laws? How does individual human behavior impact the collective human species? How is an individual representative of how collective systems, whether they be economic to security-based systems, operate? Acknowledging this emerging reality, Risk Group initiated a much-needed discussion on Strategic Impact of Quantum Physics on Financial Industry with Joseph Firmage, Founder & Chairman at National Working Group on New Physics based in the United States, on Risk Roundup.


CIO interview: Sam Shah, director for digital development, NHS England


Shah believes the effective use of standards across emerging technology will help break forms of supplier lock-in that have previously characterised much of the provision of NHS systems and services. To encourage providers to generate innovative solutions to business challenges in the health service, Shah says the sector needs to be a more attractive place for IT suppliers. “We’re keen to help – we want to generate grants to help innovators in the UK work in partnership with the NHS,” he says. “We have an entire network of academics and scientists that support our work. And we have a much more open approach to development, so that suppliers can start working with the NHS in a more meaningful way. “As we amass more data and connect more datasets, we have an opportunity to bring about precision public health to reduce inequalities and to reduce the burden on society. We can create precision medicine that allows clinicians to prescribe much more precisely around the needs of the patient and their optimal needs. Our world is becoming more data-driven, but we need help from suppliers to deliver these services.”



Put simply, location intelligence is the ability to derive business insights from geospatial information. Those with well-developed location intelligence abilities use GIS, maps, data, and analytical skills to solve real-world problems, specifically business problems. This is an important distinction. Location intelligence is primarily a business term that refers to solving business problems. GIS may be the technical foundation of location intelligence, but it’s not the same thing. ... In reality, when you factor location into analysis, you open up a world of opportunity. Specifically, you make it possible to tackle a unique set of problems. Think about an offshore oil company trying to predict and monitor sea ice activity. Rogue icebergs or shifting ice floes, driven by global climate change, pose a tremendous risk to the safe operation of offshore oil rigs and shipping vessels. Mitigation of sea ice risk is inherently about predicting and monitoring the location of sea ice: its size, shape, and speed, and the consequences if it impacts an oil platform.


European Union Votes to Create a Huge Biometrics Database


The identity records will include names, dates of birth, passport numbers, and other ID information; the biometric details include fingerprints and facial scans. The primary aim of the biometric database is to let EU border and law enforcement personnel search for people’s information faster. This is an upgrade to the current system of going through different databases when looking for information. The interoperability of the Common Identity Repository (CIR) will ensure that law enforcement officers have fast, seamless, systematic and controlled access to the information they need to perform their tasks. It would also detect multiple identities linked to the same set of biometric data and facilitate identity checks of third-country nationals (TCNs) on the territory of a Member State by police authorities. The CIR would also enable identification of TCNs who lack proper travel documents.


With regards to a blockchain platform that offers a space for content creators to go about their business unheeded, there is a lot of potential, and already some use cases of a decentralised content platform with an incentivisation program attached. Many within the blockchain sphere are aware of Steemit, a blogging and social networking website that uses the Steem blockchain to reward publishers and curators. It is a useful service because, owing to its decentralised nature, there should be no censorship - though that is in question because there is still Steemit Inc heading up the entire operation. But in principle, a fully decentralised content platform allows free rein in posting, and because of the token economy associated with it, there is monetisation, as well as crowd sentiment driving the content. Many will worry about hate speech and other dangers being pronounced on these decentralised platforms, but from quite a libertarian viewpoint, this will only be as successful as the demand for it.


WebAssembly and Blazor: A Decades Old Problem Solved

In mid-April 2019, Microsoft gently nudged a young framework from the "anything is possible" experimental phase to a "we're committed to making this happen" preview. The framework, named Blazor because it runs in the browser and leverages a templating system or "view engine" called Razor, enables the scenario .NET developers almost gave up on. It not only allows developers to build client-side code with C# (no JavaScript required), but also lets them run existing .NET Standard DLLs in the browser without a plugin. ... HTML5 and JavaScript continued to win the hearts and minds of web developers. Tools like jQuery normalized the DOM and made it easier to build multi-browser applications, while at the same time browser engines started to adopt a common DOM standard to make it easier to build once and run everywhere. An explosion of front-end frameworks like Angular, React, and Vue.js brought Single Page Applications (SPA) mainstream and cemented JavaScript as the language of choice for the browser operating system.



Quote for the day:


"Great spirits have always encountered violent opposition from mediocre minds." -- Albert Einstein


Daily Tech Digest - May 31, 2019

How To Identify What Technologies To Invest In For Digital Transformation

There are many aspects of the experience, but if you look at the central pillars of a great experience, it comes down to the acronym "ACT." The "A" pillar of ACT is anticipation: the platform must anticipate what the customer or employee needs when using the platform. The second pillar, C, reminds us that the experience must be complete. The platform should not put the burden of tasks on the customer or employee; it should run the activity to its completion and deliver a satisfying, complete result back to the customer or employee. The third pillar, T, represents timeliness. The experience needs to be performed in a time frame that is relevant and consistent with customer or employee expectations. An example is in sales, where the company has 45 minutes (or perhaps two days) to meet the stakeholder's need. This is not about response time; it's about the appropriate amount of time the individual gives the company to get to a complete answer. It could be seconds, hours or days.




The digital twin is an evolving digital profile of the historical and current behavior of products, assets, or processes and can be used to optimize business performance. Based on cumulative, real-time, real-world data measurements across an array of dimensions, the digital twin depends on connectivity—and the IIoT—to drive functionality. Amid heightened competition, demand pressures, inaccurate capacity assumptions, and a suboptimal production mix, one manufacturing company sought ways to drive operational improvements, accelerate production throughput, and promote speed to market. At the same time, however, the manufacturer was hampered by limited visibility into its machine life cycles, and knew relatively little about resource allocation throughout the facility. To gain deeper insight into its processes—and to be able to simulate how shifts in resources or demand might affect the facility—the manufacturer used sensors to connect its finished goods and implement a digital twin.



How iRobot used data science, cloud, and DevOps

The core item in the new design language is the circle in the middle of the robots. The circle represents the history of iRobot, which featured a bevy of round Roomba robots. "The circle is a nod back to the round robots and gives us the ability to be more expansive with geometries," he explains. But iRobot 2.0 also represents the maturation of iRobot. "Innovation at iRobot started back in the early days with a toolkit of robot technology. Innovation was really about market exploration and finding different ways for the toolkit to create value," Angle says. Through that lens, iRobot explored everything from robots for space exploration to toys to industrial cleaning and medical uses. "Our first 10 to 15 years of history is fraught with market exploration," Angle says. Ultimately, iRobot, founded in 1990, narrowed its focus to defense, commercial and consumer markets before focusing solely on home robots. iRobot divested its commercial and its military robot division, which was ultimately acquired by FLIR for $385 million.


The Defining Role of Open Source Software for Managing Digital Data


Open source use is accelerating and driving some of the most exciting ventures of modern IT for data management. It is a catalyst for infusing innovation. For example, Apache Hadoop, Apache Spark, and MongoDB in big data; Android in mobile; OpenStack and Docker in cloud; AngularJS, Node.js, Eclipse Che, React, among others in web development; Talend and Pimcore in data management; and TensorFlow in machine learning. Plus, the presence of Linux is now everywhere—in the cloud, the IoT, AI, machine learning, big data, and blockchain. This ongoing adoption trend of open source software, especially in data management, will intensify in the years ahead. Open source has a certain edge in that it does not restrain IT specialists and data engineers from innovating and making the use of data more pervasive. In my experience, successful data management depends upon breaking down data silos in the enterprise, with a consolidated platform in place for rationalizing old data as well as deploying new data sources across the enterprise.


DevOps security best practices span code creation to compliance


Software security often starts with the codebase. Developers grapple with countless oversights and vulnerabilities, including buffer overflows; authorization bypasses, such as not requiring passwords for critical functions; overlooked hardware vulnerabilities, such as Spectre and Meltdown; and ignored network vulnerabilities, such as OS command or SQL injection. The emergence of APIs for software integration and extensibility opens the door to security vulnerabilities, such as lax authentication and data loss from unencrypted data sniffing. Developers' responsibilities increasingly include security awareness: They must use security best practices to write hardened code from the start and spot potential security weaknesses in others' code. Security is an important part of build testing within the DevOps workflow, so developers should deploy additional tools and services to analyze and evaluate the security posture of each new build.
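
As a concrete instance of writing hardened code from the start, the sketch below contrasts an injectable query with its parameterized equivalent, using Python's built-in sqlite3 module purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: string formatting lets the input rewrite the query itself.
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'").fetchall()
print("injectable query returned:", rows)     # leaks every row

# Hardened: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # returns nothing
```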


Chief artificial intelligence officer

The CAIO might not be at the Executive Committee level, but beware the various other departments reaching out to own the role. AI often gets its initial traction through innovation teams – but is then stymied in the transition to broader business ownership. The IT function has many of the requisite technological skills but often struggles to make broader business cases or to deliver on change management. The data team would be a good home for the CAIO, but only if it operates at the ExCom level: a strong management information (MI) function is a world away from a full AI strategy. Key functions may be strong users of AI – digital marketing teams or customer service teams with chatbots, for example – but they will always be optimising on specific things. So, who will make a good CAIO? This is a hard role to fill: balancing data science and technology skills with broader business change management experience is a fine line to walk. Ultimately, circumstances will dictate where the balance should be struck. Factors include the broader team mix and the budget available, but above all the nature of the key questions that the business faces.


Researcher Describes Docker Vulnerability

Containers, which have grown in popularity with developers over the last several years, are a standardized way to package application code, configurations and dependencies into what's known as an object, according to Amazon Web Services. The flaw that Sarai describes is part of Docker's FollowSymlinkInScope function, which is typically used to resolve file paths within containers. Instead, Sarai found that this particular symlink function is subject to a time-of-check to time-of-use, or TOCTOU, bug. ... But a bug can occur that allows an attacker to modify these resource paths after resolution but before the assigned program starts operating on the resource. This allows the attacker to change the path after the verification process, thus bypassing the security checks, security researchers say. "If attackers can modify a resource between when the program accesses it for its check and when it finally uses it, then they can do things like read or modify data, escalate privileges, or change program behavior," Kelly Shortridge, vice president of product strategy at Capsule8, a security company that focuses on containers, writes in a blog about this Docker vulnerability.
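
The general shape of a TOCTOU race, reduced to a few lines of illustrative Python (this is not Docker's actual FollowSymlinkInScope code, just the pattern it fell victim to, with hypothetical paths):

```python
import os

CONTAINER_ROOT = "/container-rootfs"  # hypothetical path

def copy_from_container(path: str) -> bytes:
    # Time of check: verify the resolved path stays inside the container.
    if not os.path.realpath(path).startswith(CONTAINER_ROOT + "/"):
        raise PermissionError("path escapes the container root")

    # Window of vulnerability: between the check above and the open()
    # below, a process inside the container can swap `path` for a
    # symlink pointing at, say, /etc/shadow on the host.

    # Time of use: open() re-resolves the (possibly swapped) path.
    with open(path, "rb") as f:
        return f.read()

# Mitigations close the window: resolve once and operate on the resulting
# file descriptor, or pause the container while paths are resolved.
```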


JDBC vs. ODBC: What's the difference between these APIs?

Many people associate ODBC with Microsoft because Microsoft integrates ODBC connectors right into its operating system. Furthermore, Microsoft has always promoted Microsoft Access as an ODBC-compliant database. In reality, the ODBC specification is based upon the Open Group's Call Level Interface specification and is supported by a variety of vendors. The JDBC specification is owned by Oracle and is part of the Java API. Evolution of the JDBC API, however, is driven by the open and collaborative JCP and Java Specification Requests. So while Oracle oversees the API's development, progress is largely driven by the user community. Despite the separate development paths of ODBC and JDBC, both support various agreed-upon specifications implemented by RDBMS vendors. These standards are set by the International Organization for Standardization's data management and interchange committee, and both JDBC and ODBC vendors work to maintain compliance with the latest ISO specification.
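
Both APIs expose the same call-level shape: load a driver, open a connection, execute SQL, iterate over results. As a rough illustration from the ODBC side, here is a sketch using Python's pyodbc bridge; the driver name, server, and credentials are placeholders, and a JDBC program in Java would follow the same connect/execute/fetch sequence.

```python
import pyodbc  # Python bridge to any ODBC-compliant driver

# The driver name, server, and credentials below are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=db.example.com;DATABASE=sales;UID=app;PWD=secret"
)

cursor = conn.cursor()
cursor.execute("SELECT id, total FROM orders WHERE total > ?", 100)
for row in cursor.fetchall():
    print(row.id, row.total)

conn.close()
```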


LinkedIn Talent Solutions: 10 tips for hiring your perfect match

The product uses AI to recommend relevant candidates that could be a good fit for an available role, and it leverages analytics to make recommendations in real time as you’re crafting your job description. LinkedIn Recruiter and Jobs also allows companies to target open roles using LinkedIn Ads to reach relevant candidates. In the new Recruiter and Jobs, talent professionals no longer have to jump back and forth between Recruiter and Jobs; the update puts search leads and job applicants for an open role within the same project, viewable on a single dashboard. Candidates can then be saved to your Pipeline, where they’ll move through the later stages of the hiring process. ... Finally, LinkedIn Pages allows organizations of any size to showcase their unique culture and employee experience by posting employee-created content, videos and photos. Candidates can visit an organization’s page to see what it has to offer, as well as get personalized job recommendations and connect with employees like them, according to LinkedIn. Real-time page analytics can identify who’s engaging with your organization’s page and which content is making the greatest impact.


Sidecar Design Pattern in Your Microservices Ecosystem

Segregating the functionalities of an application into a separate process can be viewed as the Sidecar pattern. The sidecar design pattern allows you to add a number of capabilities to your application without writing additional configuration code for third-party components. Just as a sidecar is attached to a motorcycle, in software architecture a sidecar is attached to a parent application and extends or enhances its functionality. A sidecar is loosely coupled with the main application. Let me explain this with an example. Imagine that you have six microservices talking with each other in order to determine the cost of a package. Each microservice needs functionalities like observability, monitoring, logging, configuration, circuit breakers, and more. All of these functionalities are implemented inside each of the microservices using industry-standard third-party libraries. But is this not redundant? Does it not increase the overall complexity of your application?
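
A toy sketch of the pattern, with invented file paths and service names: the main application just writes plain log lines, while a sidecar process deployed alongside it (in Kubernetes, a second container in the same pod sharing a volume) tails the file and takes on the cross-cutting concern of shipping structured logs.

```python
import json
import time

# Toy sidecar: tails the parent application's log file and handles a
# cross-cutting concern (structured log shipping) so the app doesn't
# have to. In Kubernetes this would run as a second container in the
# same pod, sharing a volume with the main application container.

LOG_FILE = "/shared/app.log"  # hypothetical shared-volume path

def ship(entry: dict) -> None:
    """Stand-in for forwarding to a log aggregator over the network."""
    print(json.dumps(entry))

def tail_and_ship() -> None:
    with open(LOG_FILE) as f:
        f.seek(0, 2)  # start at the end of the file, like `tail -f`
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            ship({"service": "orders", "ts": time.time(),
                  "message": line.strip()})

if __name__ == "__main__":
    tail_and_ship()
```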



Quote for the day:


"The essential question is not, "How busy are you?" but "What are you busy at?" -- Oprah Winfrey


Daily Tech Digest - May 30, 2019

GDPR - Data Privacy And The Cloud

The recent and rapid transition to multi-cloud networks, platforms, and applications complicates this challenge. To meet data privacy requirements in such environments, organizations need to implement security solutions that span the entire distributed network in order to centralize visibility and control. This enables organizations to provide consistent data protections and policy enforcement, see and report on cyber incidents, and remove all instances of PII on demand. Achieving this requires three essential functions: Security needs to span multi-cloud environments. Compliance standards need to be applied consistently across the entire distributed infrastructure. While privacy laws may belong to a specific region, the cloud makes it easy to cross these boundaries. ... Compliance reporting requires centralized management. Compliance reporting needs to span the entire distributed infrastructure. As with other requirements, this also demands consistent integration throughout the cloud and with the on-premise security infrastructure. Achieving this requires the implementation of a central management and orchestration solution.


Disruption, data and the changing role of the CIO

This paradigm shift is a necessary result of the accelerated pace of technological change and increased pressure to adopt emerging technologies to avoid falling behind competitors. One possible response is to cling to the old ways, that is, to slow down adoption of 4IR technologies, and to resist the democratization of technology. But the risks of this approach, tempting as it might be given the sometimes overwhelming challenges, are high. First, a rigid or cumbersome process for adopting technologies will surely mean that competitors are moving forward faster. Second, a company that resists the democratization of technology may discourage potential employees who are intellectually curious. Further, such resistance to change may limit the potential of employees by signaling that compliance is more important than creativity. While having a heavy foot on the brake is a problem, a CIO who is pushing too hard on the accelerator isn’t the solution. The temptation is understandable.


Top 10 Future Trends In Android Development You Cannot Miss In 2019

Yes! People can now command smart devices to perform basic routine activities, and these devices will interact with machines to run, stop, and function through an internet connection. The Internet of Things (IoT) refers to the increased interconnectedness among different smart devices through the internet. It is one step ahead in device-to-machine interaction. For this, smart devices must feature an internet connection and sensors that allow the device to gather, receive, and transfer information. It’s very easy to operate and control a smart TV, a toaster in the kitchen, an air conditioner in the living room, or a treadmill in the gym through smart devices. ... It’s fascinating that the wearables market is thriving and alive. Smart wearables are basically the use of technology worn on the body, close to the body or in the body. There’s no doubt that trends in wearables will go a step further to get many tasks done from a single smart device: playing a game from a VR headset, a smartwatch or other Android wearables, or having a mobile nurse with you to track your health through a smart belt, smartwatch or smart glasses.


Hackers targeting UK universities a threat to national security


In light of this, and the threat that research programmes are under, 10% of the 75 senior IT leaders polled by Vanson Bourne research “strongly agree” that a successful attack could have a harmful impact on the lives of UK citizens. Findings also show that nearly a quarter (24%) of UK universities polled believe their security and defence research may have already been infiltrated, while over half (53%) say a cyber attack on their institution has led to research ending up in foreign hands. “British universities have long been celebrated around the world for their academic excellence, and the role they play in not only driving technological and social innovation through research, but also advances in defence and security,” said Louise Fellows, director, public sector UK and Ireland, at VMware. “Keeping pace with today’s sophisticated cyber threats is an enormous challenge. Those responsible for protecting universities and the data they hold must examine how they can evolve practices and approaches in line with an increasingly complex threat landscape, including cyber security as a consideration at every stage of the research process by design,” she said.


Natural language processing explained

Like any other machine learning problem, NLP problems are usually addressed with a pipeline of procedures, most of which are intended to prepare the data for modeling. In his excellent tutorial on NLP using Python, DJ Sarkar lays out the standard workflow: Text pre-processing -> Text parsing and exploratory data analysis -> Text representation and feature engineering -> Modeling and/or pattern mining -> Evaluation and deployment.  Sarkar uses Beautiful Soup to extract text from scraped websites, and then the Natural Language Toolkit (NLTK) and spaCy to preprocess the text by tokenizing, stemming, and lemmatizing it, as well as removing stopwords and expanding contractions. Then he continues to use NLTK and spaCy to tag parts of speech, perform shallow parsing, and extract Ngram chunks for tagging: unigrams, bigrams, and trigrams. He uses NLTK and the Stanford Parser to generate parse trees, and spaCy to generate dependency trees and perform named entity recognition. 
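
A compressed version of that preprocessing stage, using the same libraries the tutorial names; this assumes spaCy's en_core_web_sm model is installed, and the outputs shown in comments are indicative.

```python
import spacy
from nltk.stem import PorterStemmer

nlp = spacy.load("en_core_web_sm")  # spaCy model must be downloaded first
stemmer = PorterStemmer()

text = "The analysts weren't modeling the scraped texts correctly."
doc = nlp(text)

tokens = [t.text for t in doc]                              # tokenization
content = [t for t in doc if not t.is_stop and t.is_alpha]  # stopword removal
lemmas = [t.lemma_ for t in content]                        # lemmatization (spaCy)
stems = [stemmer.stem(t.text) for t in content]             # stemming (NLTK)
pos_tags = [(t.text, t.pos_) for t in doc]                  # part-of-speech tags
entities = [(e.text, e.label_) for e in doc.ents]           # named entities

print(lemmas)  # e.g. ['analyst', 'model', 'scrape', 'text', 'correctly']
```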


Baltimore Ransomware Attack Triggers Blame Game

The Times reports that the exploit was used numerous times, and proved very valuable for intelligence operations over a five-year period, before the agency lost control of it. Only then did the NSA alert Microsoft to the flaw, leading to it quickly issuing patches. And now Baltimore is one of the latest victims of attackers exploiting the flaw, the Times reports. The short list of who to potentially blame for the Baltimore incident now includes: the National Security Agency, for building the exploit and holding onto it for five years, without alerting Microsoft, before losing control of it; the shadowy group - maybe foreign, maybe domestic - calling itself the Shadow Brokers, which leaked the exploit in April 2017; Microsoft, for not building bug-free operating systems; the city of Baltimore, for having failed to apply an emergency Windows security update more than two years after it was released in March 2017 - and two months later for older operating systems - which blocked EternalBlue exploits in every Windows operating system from XP onward; and, of course, the attackers, whoever they might be.


In a technical report published today, Nacho Sanmillan, a security researcher at Intezer Labs, highlights several connections and similarities that HiddenWasp shares with other Linux malware families, suggesting that some of HiddenWasp's code might have been borrowed. "We found some of the environment variables used in an open-source rootkit known as Azazel," Sanmillan said. "In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that actors behind HiddenWasp may have integrated and modified some MD5 implementation from [the] Elknot [malware] that could have been shared in Chinese hacking forums," the researcher added. ... Hackers appear to compromise Linux systems using other methods, and then deploy HiddenWasp as a second-stage payload, which they use to control already-infected systems remotely.


Going beyond basic cyberhygiene to protect data assets

Skills and career development can start on a small scale, through free, vendor-sponsored programs, convenient online courses, or even at the library. ... By investing in learning as a lifestyle, common challenges such as finding time to sit down and complete a training module become easier to overcome. ... The scale and scope of cybercrime grows every day—new technologies introduce new vulnerabilities faster than they can be secured, and cybercriminals continue to find new ways to attack organizations. By understanding the pattern of evolution in the cyberlandscape and adopting an intelligence-based approach, technology and security professionals can arm themselves for anything that comes their way. As tech pros continue building security skills in daily operations, they take steps beyond basic cyberhygiene. Understanding their IT environment to uncover hidden risks, educating business leaders, leveraging data to show the value of IT efforts, implementing the “right” tools, and investing in training are key to going beyond basic cyberhygiene.


The technology itself has pushed adoption to these heights, said Graham Trickey, head of IoT for the GSMA, a trade organization for mobile network operators. Along with price drops for wireless connectivity modules, the array of upcoming technologies nestling under the umbrella label of 5G could simplify the process of connecting devices to edge-computing hardware – and the edge to the cloud or data center. “Mobile operators are not just providers of connectivity now, they’re farther up the stack,” he said. Technologies like narrow-band IoT and support for highly demanding applications like telehealth are all set to be part of the final 5G spec. ... That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge. “That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey.


Business Associates Reminded of HIPAA Duties

"Business associates still struggle with their HIPAA Security Rule obligations, in many of the same ways as do covered entities, including with regard to risk analysis, risk management and encryption, for example," says privacy attorney Iliana Peters of the law firm Polsinelli. "Business associates struggle with understanding their obligations to flow down the requirements of their business associate agreements with their own vendors that have access to protected health information." Covered entities and business associates alike must understand the lifecycle of their data so that appropriate HIPAA-required security safeguards are applied, Peters adds. And business associates should periodically conduct "mini-audits" of their security practices to ensure they are meeting obligations spelled out in their BA agreements, she says. Even though business associates became directly liable for HIPAA compliance nearly six years ago, confusion about their duties persists. "Some BAs fail to understand the full scope of their compliance responsibilities," says Kate Borten, president of privacy and security consultancy The Marblehead Group.




Quote for the day:


"If you truly love life, don’t waste time because time is what life is made of." -- Bruce Lee