Daily Tech Digest - April 16, 2018

Busted! Cops use fingerprint pulled from a WhatsApp photo to ID drug dealer

A bust resulted in the police getting hold of a phone that had a WhatsApp message and image of ecstasy pills in a person’s palm. The message read: “For sale – Skype and Ikea-branded ecstasy pills…are you interested?” The phone was sent to South Wales Police, where the photo showing the middle and bottom portion of a pinky was enhanced. As for fingerprint identification, the BBC reported that “a search of the national database did not bring a match” as “when offenders give fingerprints, it is just the top part taken — with the middle and bottom part only occasionally left.” Here’s where it gets a bit confusing, as a different BBC article stated that “other evidence meant officers had an idea who they believed was behind the drugs operation.” Although that makes it sound like tips from locals about “a large number of visitors to one address” were the real way cops found the guy whose partial pinky was in the photo, Dave Thomas of the South Wales Police’s scientific support unit told the BBC, “While the scale and quality of the photograph proved a challenge, the small bits were enough to prove he was the dealer.”



Overclock puts your idle servers to work for other people

Once you set up the Akash agent, you are done. Workloads are sent to your servers and executed, the results are sent back, and the containers are shut down. No intervention is needed on your part. That said, Overclock does provide the necessary tools to configure, deploy, monitor, and manage the workloads. A developer who needs the resources specifies their deployment criteria, such as resources needed, topology, and the price they are willing to pay, in a posting to the Akash blockchain. Providers with server cycles to offer automatically detect the new bid request and programmatically bid to host it. The lowest bid wins the auction, a lease is created, and the parties exchange keys. All of this is done with no human intervention. The Akash agent then begins picking up workloads in Docker containers, orchestrated by Kubernetes and distributed over Akash’s peer-to-peer file sharing protocol. Your applications can be run as is because they run in Docker containers. Payment via the Akash token is also done via the blockchain, allowing for a full audit of transactions by lessors and lessees.
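The bid-and-lease flow described above is a reverse auction: the developer posts an order, providers bid, and the lowest price wins. A minimal sketch of that selection step follows; the `Bid` type, provider names, and prices are illustrative stand-ins, not Akash's actual API or token units.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price: float  # price offered to host the workload (hypothetical unit)

def run_auction(bids):
    """Reverse auction as described in the text: the lowest bid wins
    and a lease is created between the two parties.

    Returns a (provider, price) lease tuple, or None if nobody bid.
    """
    if not bids:
        return None
    winner = min(bids, key=lambda b: b.price)
    return (winner.provider, winner.price)

# A developer posts a deployment order; providers respond with bids.
bids = [Bid("provider-a", 12.0), Bid("provider-b", 9.5), Bid("provider-c", 11.0)]
lease = run_auction(bids)
print(lease)  # provider-b wins with the lowest price
```

In the real system this matching happens on-chain with no human in the loop; the sketch only shows the selection rule, not key exchange or payment.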



Is Hybrid Cloud Right For Your Organization?

Hybrid cloud is less about using private and public cloud in concert for the same applications — and more about using the right mix of these separate and distinct computing resources to accomplish your organization’s overall IT objectives. As the name suggests, private cloud is a secure, private computing environment in which only a single organization operates. The public cloud, meanwhile, includes Amazon Web Services (AWS), Microsoft Azure and the Google Cloud Platform. And common SaaS subscription providers include Salesforce, Office365, Google Apps, Workday and Cisco WebEx. According to IDG, all eyes have been on the public cloud over the last few years, but private and hybrid clouds are set for big growth in 2018. Each of the major IaaS public cloud vendors spent 2017 clarifying their hybrid cloud strategy, setting 2018 up to be the year of adoption. The biggest effort has come from Microsoft, which finally released Azure Stack, a private cloud IaaS platform that is meant to mirror the Azure public cloud. Deployments of Azure Stack have been hitting the market this year.


Get an AI Head Start: Buy It

If you are buying your AI from SAP or Oracle, and your competitor is buying the same thing from SAP or Oracle, how do you get a competitive edge? Isn't that a pretty level playing field? What's the point? Your data itself will be the real competitive edge going forward. AI solutions will become commoditized. But your data remains proprietary and valuable. Flannagan told me that in almost every meeting he has with customers, executives are recognizing that their data has value, either for internal purposes or for selling to a data partner. That's what the third-party experts are saying, too. "Enterprises that are leveraging the AI investments built into enterprise platform software need to look beyond algorithms for competitive differentiation," Purcell told me. "At the end of the day, the machine learning algorithms at the brain of AI are commoditized and widely available in open source as well as vendor technologies. Data will be the key source of competitive differentiation in the world of AI -- emerging data sources, innovative data transformations, and business-infused data understanding will lead to better models and ultimately better results from AI."


Large Majority of Businesses Store Sensitive Data in Cloud Despite Lack of Trust

Survey results show that once it's in the cloud, this information is at risk. One in four organizations using infrastructure-as-a-service (IaaS) or software-as-a-service (SaaS) has had their data stolen. One in five has been hit with an advanced attack against their public cloud infrastructure. McAfee researchers discovered an overall decline in the "cloud-first" mentality, with only 65% of respondents reporting a cloud-first strategy compared with 82% one year ago. This drop can be attributed to two factors, says Vittorio Viarengo, vice president of marketing for McAfee's Cloud Business Unit. The first is a growing awareness of the responsibility that comes with storing data in the public cloud. "Customers are realizing they're still on the hook to provide security for some of the things that happen in the cloud," he explains. They're learning, for example, that service providers don't ensure their logins are properly set up, and about the security risks of remote employees using cloud services. They're learning what they're responsible for when they use IaaS platforms versus SaaS.


It's time to rebuild the web

We'd also need to avoid many of the privacy and security flaws that were rampant in the early internet, and for which we're still paying. That technical debt came due a long time ago. Paying off that debt may require some complex technology, and some significant UI engineering. All too often, solutions to security problems make things more difficult for both users and attackers. Cloudflare's new 1.1.1.1 service addresses some basic problems with our DNS infrastructure and privacy, and their CEO proposes some more basic changes, like DNS over HTTPS. But even simple changes like this require non-technical users to change configuration settings that they don't understand. This is where we really need the help of UX designers. We can't afford to make "safe" difficult. And we'd have to admit that our current web, with all its flaws, evolved from these simple building blocks. To some extent, then, it's what we wanted—or, perhaps, what we deserved. It's certainly what we accepted, which raises the question: "why wouldn't we accept the same thing again?"
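To make the DNS-over-HTTPS idea concrete: Cloudflare exposes a documented JSON API at `cloudflare-dns.com/dns-query`, where a lookup is just an HTTPS GET with `name` and `type` parameters (sent with an `accept: application/dns-json` header). A minimal sketch of forming such a query; the helper name is ours, and no network call is made here:

```python
from urllib.parse import urlencode

# Cloudflare's public JSON DNS-over-HTTPS endpoint
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_query_url(name, rtype="A"):
    """Build a DNS-over-HTTPS query URL for Cloudflare's JSON API.

    The request must also carry the header 'accept: application/dns-json'
    for the resolver to return a JSON answer instead of wire-format DNS.
    """
    return f"{DOH_ENDPOINT}?{urlencode({'name': name, 'type': rtype})}"

print(doh_query_url("example.com"))
# https://cloudflare-dns.com/dns-query?name=example.com&type=A
```

The point in the text stands: even this one-URL change is invisible to non-technical users unless the browser or OS does it for them.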


CrowdStrike tools help businesses recover quickly after cyberattack

By leveraging contextual data and technologies like machine learning, security advances like those from CrowdStrike could help cyber professionals more effectively protect their organizations and respond to attacks. The cornerstone of this approach is CrowdStrike's Falcon X. Built on the existing Falcon platform from CrowdStrike, Falcon X is an endpoint solution that combines "malware sandboxing, malware search and threat intelligence into an integrated solution that can perform comprehensive threat analysis in seconds instead of hours or days," according to a press release. According to the Falcon X release, the tool offers indicators of compromise (IOCs) for the threats it comes across in your organization, along with all of their known variants. Additionally, integrated threat intelligence makes it easier for human cybersecurity pros to research and defend against threats. Falcon X is known for five core capabilities: automated threat analysis of quarantined files, malware search on the CrowdStrike Falcon Search Engine, malware analysis, threat intelligence, and custom-tailored intelligence for your organization, the release said.


The Quirky Secrets of the World’s Greatest Innovators

Innovators are also typically blessed (or cursed) with a deep sense of what psychologists call self-efficacy, which is a nice word for what, in other contexts, might be called hubris: the conviction that one can accomplish whatever one sets one’s mind to. This is crucial because the very nature of breakthrough innovations means that most people will be skeptical of their value. Indeed, most of the people Schilling writes about were, in one sense or another, outsiders in the fields they helped revolutionize. They were also idealists, convinced that they could change the world. As Schilling puts it, “They are willing to pursue an idea even when everybody else says it’s crazy precisely because they don’t need the affirmation of others — they believe they are right even if you don’t agree.” It was that sense of self-efficacy that allowed Elon Musk to believe he could become the first civilian to put rockets into space, and that allowed Dean Kamen to build a wheelchair that could climb stairs, even though everyone told him it was impossible.


Managing Data in Microservices


High-performing organizations with these kinds of requirements have some things to do. The DevOps Handbook features research from Gene Kim, Nicole Forsgren, and others into the difference between high-performing organizations and lower-performing ones. Higher-performing organizations both move faster and are more stable. You don't have to make a choice between speed and stability — you can have both. The higher-performing organizations are doing multiple deploys a day, versus maybe one per month, and have a latency of less than an hour between committing code to source control and deploying it, while in other organizations that might take a week. That's the speed side. On the stability side, high-performing organizations recover from failure in an hour, versus maybe a day in a lower-performing organization. And the rate of failures is lower. The frequency of a high-performing organization deploying, having it not go well, and having to roll back the deployment approaches zero, but slower organizations might have to do this half the time. This is a big difference.
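The two sides described above reduce to measurable quantities: lead time (commit to deployment) for speed, and rollback rate for stability. A minimal sketch of computing both, with illustrative timestamps and counts:

```python
from datetime import datetime

def lead_time_hours(commit_ts, deploy_ts):
    """Lead time: hours from a commit landing in source control to deployment."""
    return (deploy_ts - commit_ts).total_seconds() / 3600

def change_failure_rate(deploys, rollbacks):
    """Fraction of deployments that went badly and had to be rolled back."""
    return rollbacks / deploys if deploys else 0.0

# A high performer: committed at 09:00, deployed at 09:45 the same day.
commit = datetime(2018, 4, 16, 9, 0)
deploy = datetime(2018, 4, 16, 9, 45)
print(lead_time_hours(commit, deploy))  # 0.75 — under the one-hour bar
print(change_failure_rate(200, 4))      # 0.02 — rollback rate approaching zero
```

A lower performer in the text's terms would show a lead time of roughly 168 hours (a week) and a failure rate near 0.5.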


Can the Law Stop Ransomware?

Cybersecurity experts and legal scholars contend that the best approach is preparation: following best practices such as regularly backing up data, educating employees about threats and risks, and maintaining robust firewalls. That approach, however, has continued to lag, with cash-strapped cities and states often still unable to afford or simply unwilling to make the costly systems upgrades frequently needed to seal off vulnerabilities. Atlanta Mayor Keisha Lance Bottoms, for example, acknowledged to The New York Times that cybersecurity had not been a priority until the city was attacked. "Cybersecurity, it's something that is abstract, it's invisible, so in politics it's difficult to say, 'OK, we're going to spend $10 million on cybersecurity,'" says Cesar Cerrudo, chief technology officer of IOActive Labs. ... That's created the surreal scenario of city councils, state governments and even police departments agreeing to pay ransoms simply to get their stuff back. Indeed, attackers deliberately set the ransoms low enough that the risk of losing the files altogether – or the expense of hiring a security firm to try to recover them – simply isn't worth it.



Quote for the day:


"Behind every beautiful thing, there's been some kind of pain." -- Bob Dylan


Daily Tech Digest - April 15, 2018

AI and machine learning are forcing CIOs to rethink IT strategies
“Machine learning can be the IT pro’s best friend; they just need to realize how it can be used to make their jobs easier.” This makes sense because the use of machine learning will be a “crawl-walk-run” for most organizations, as they will apply it in phases. The first phase will be using it to describe something: it analyzes the data and helps interpret it. The next phase is more cognitive, where the AI can start to solve problems. The third phase will see the technology start to predict things. For example, it could perhaps predict that a security breach is going to occur based on other data. The last phase, and we are years away from this, is prescriptive, where the AI is able to predict things and then take action to remediate them. In the previous example, it could not only predict a breach but then take the necessary steps to ensure it doesn’t happen. For this to occur, the AI would use itself in an iterative manner.


Machine learning & language complexity: why chatbots can’t talk… yet

Most of the value of deep learning today is in narrow domains where you can get a lot of data. Here’s one example of something it cannot do: have a meaningful conversation. There are demos, and if you cherry-pick the conversation, it looks like it’s having a meaningful conversation, but if you actually try it yourself, it quickly goes off the rails. In fact, anything that’s a bit too open-domain is beyond what we can currently do. Instead, in the meantime, we can use these systems to assist human workers, who then review and correct the suggested responses. That’s much more feasible. When they interact with others, people tend to express the same intent with different words, potentially over several sentences with different word orders. Talking to chatbots can sometimes be challenging — current chatbot solutions don’t handle this diversity. Therefore, you’d better format your dialogue in order to be understood. This is frustrating.


The Cold Start Problem with AI


Any company, whether startup or enterprise, that wants to take advantage of AI needs to ensure that it has actual useful data to start with. Where some companies might suffice with simple log data generated by their application or website, a company that wants to use AI to enhance its business/products/services should ensure that the data it is collecting is the right type of data. Depending on the industry and business you are in, the right type of data can be log data or transactional data, either numerical or categorical; it is up to the person working with the data to decide what that needs to be. Besides collecting the right data, another big step is ensuring that the data you work with is correct, meaning that the data is an accurate representation of what happened. If I want a count of all the Payment Transactions, I need to know the definition of a Transaction: is it an Initiated Transaction or a Processed Transaction? Only once I have answered that question and ensured that the organization agrees on it can I work with the data.
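The transaction-counting example can be made concrete: the same dataset yields different "counts of Payment Transactions" depending on which agreed definition you apply. The records and the `status` field below are hypothetical, purely to illustrate the point:

```python
# Hypothetical payment records; 'status' encodes the distinction the text
# draws between initiated and processed transactions.
transactions = [
    {"id": 1, "status": "initiated", "amount": 10.0},
    {"id": 2, "status": "processed", "amount": 25.0},
    {"id": 3, "status": "processed", "amount": 5.0},
    {"id": 4, "status": "failed",    "amount": 40.0},
]

def count_transactions(records, definition="processed"):
    """Count payment transactions under an agreed definition.

    The answer depends entirely on which definition the organization
    settled on — 'initiated' and 'processed' give different counts.
    """
    return sum(1 for r in records if r["status"] == definition)

print(count_transactions(transactions, "processed"))  # 2
print(count_transactions(transactions, "initiated"))  # 1
```

Until the organization agrees on one definition, both numbers are "the count of transactions", which is exactly the ambiguity the author warns about.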


How Blockchain Will Change the Sharing Economy

Think of how the sharing economy has exploded in the past decade. If you’ve taken an Uber to the airport or rented an Airbnb, you’ve been a part of it. We’re even at a point where renting out personal items is a viable business model. For example, Omni Storage stores items you’re not using — just like a normal storage company — but they also rent your items out to people. Skis, guitar, winter jacket. It’s all available for rent (with the owner’s permission) via an app. We all hold onto certain possessions, because we plan to use them eventually. Or so we tell ourselves. Why not make some money off of our stuff instead of letting it go unused? That question is at the heart of the sharing economy, and we’re going to be hearing a lot more about businesses like Omni in the next few years. This is what it can look like if blockchain is involved. Futuristic sharing concepts will only work if many other considerations are taken care of. Each item has to be documented, proven authentic, assigned a current value, and even insured. And blockchain can be extremely useful here.


Will artificial intelligence make you a better leader?

Leading with inner agility
In our experience, AI can be a huge help to the leader who’s trying to become more inwardly agile and foster creative approaches to transformation. When a CEO puts AI to work on the toughest and most complex strategic challenges, he or she must rely on the same set of practices that build personal inner agility. Sending AI out into the mass of complexity, without knowing in advance what it will come back with, the CEO is embracing the discovery of original, unexpected, and breakthrough ideas. This is a way to test and finally move on from long-held beliefs and prejudices about their organization, and to radically reframe the questions in order to find entirely new kinds of solutions. And the best thing about AI solutions is that they can be tested. AI creates its own empirical feedback loop that allows you to think of your company as an experimental science lab for transformation and performance improvement. In other words, the hard science of AI can be just what you need to ask the kind of broad questions that lay the foundation for meaningful progress.


200,000 Cisco Network Switches Reportedly Hacked

In its blog post, Kaspersky Lab states: "It seems that there's a bot that is searching for vulnerable Cisco switches via the IoT search engine Shodan and exploiting the vulnerability in them (or, perhaps, it might be using Cisco's own utility that is designed to search for vulnerable switches). Once it finds a vulnerable switch, it exploits the Smart Install Client, rewrites the configuration and thus takes another segment of the Internet down. That results in some data centers being unavailable, and that, in turn, results in some popular sites being down." In an advisory on the Cisco switch vulnerability issued Monday, the Indian Computer Emergency Response Team stated that multiple vulnerabilities have been reported in Cisco IOS XE, which could be exploited by a remote attacker to send a crafted packet to an affected device and gain full control of it, or to cause a denial-of-service condition.


Cyber Accountability: The Next Boardroom Struggle

The data protection officer will also need the right tools in place to monitor irregularities and work with the CISO network team. Real-time analysis at the network level will give businesses an indication of the files or data that have been transferred or viewed from the network environment. This will support any breach reporting and give an organisation the means to handle the reputational aspect of a breach fallout, and rapidly understand what data has been accessed and how to respond. The next key part of the puzzle is for a business to have a slick process for reporting and communicating breaches to the regulator, customers and any other affected parties. Practice is the only way to prepare: define a process, rehearse it in simulations with the required decision makers, refine it, and repeat as the business and regulatory environment shifts, year on year. Complement this with a clear and defined internal procedure so all staff know what to do and who they need to speak to if they notice something awry.


Graph databases and machine learning will revolutionize MDM strategies

Traditional MDM has been around since the early 2000s. As data volume has grown and the potential value of analytics has exploded, enterprises seeking to compete on analytics struggle to scale mastering efforts with the surfeit of available data sources. Clearly, creating robust data engineering pipelines to unify this data at scale is more important -- and harder -- than ever. An "agile" approach utilizing machine learning can cut the time required for unification or analytics projects by around 90% while scaling to more sources than traditional approaches. Moreover, given the scale of enterprise data, automation is the key to agility and scale. Such enterprise data automation can only be achieved with some human oversight to make sure the results are fast and accurate. Not just raw data scalability, but also human process scalability is enabled by machine learning.
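The core mastering task is deciding when two records from different sources refer to the same entity. A toy sketch of that idea, using a simple string-similarity score as a stand-in for a learned matcher (record names and the 0.7 threshold are illustrative, not any particular MDM product's method):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity in [0, 1] — a stand-in for a trained model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_records(names, threshold=0.7):
    """Pair up likely-duplicate master records above a similarity threshold.

    In the 'agile' approach the text describes, a model proposes matches
    like these and humans review only the borderline cases.
    """
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if similarity(names[i], names[j]) >= threshold:
                pairs.append((names[i], names[j]))
    return pairs

records = ["Acme Corp", "ACME Corporation", "Globex Inc", "Acme Corp."]
print(match_records(records))  # the Acme variants pair up; Globex stays unmatched
```

The human-oversight point from the text maps onto the threshold: matches well above it can be auto-merged, while scores near it are routed to a data steward.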


Managing vulnerabilities in the cloud . . . and everywhere else

The public and, more importantly, governmental leaders are losing patience with companies that fall victim to attacks because they didn’t address known vulnerabilities with available patches and highly publicized exploits. Aside from how dangerous the leaked NSA-developed exploits can be in the hands of cybercriminals, attacks like WannaCry showed us how connected we are. The “ransomworm” spread like wildfire through networks and jumped into new areas through third-party connections. Where there was a path, there was a way. This should be of concern, especially amid the move to the cloud, where complexity and visibility challenges only become more daunting. To stay safe in the era of distributed attacks and cloud-first strategies, organisations need to rethink how they view their attack surface. Attackers don’t see your network with distinct boundaries, and neither can you. No matter if it’s your physical, virtual or cloud network — you need to approach security holistically and centralize management.


IT’s worst addictions (and how to cure them)

While technology addiction is a real thing, especially for teenagers, IT pros have their own monkeys on their backs. Whether you're an infrastructure junkie or a Slack head, chasing the data dragon or mesmerized by the blinking lights on your network operations center dashboard, your tech addictions can kill productivity, sap budgets and stall innovation. An inability to relinquish control can lead to technology silos and turf wars. Overdependence on artificial intelligence can actually hurt, not help, your company. And while everyone loves shiny new toys, they may not be the most cost-effective solutions for your organization. The first step on the road to recovery is admitting you have a problem. The next step is reading our prescriptions for how to kick your bad habits and get clean again. "Organizations are caught in analysis paralysis," says Sarah Kampman, vice president of product at Square Root, whose CoEfficient SaaS platform helps retailers and automotive brands make sense of their data. "The information isn't translating into behavioral changes that drive success."



Quote for the day:


"Don’t be so quick to label something as “bad.” It may be the thing that takes you to success." -- Tim Fargo


Daily Tech Digest - April 14, 2018

10 best practices for bolstering security and increasing ROI

There is nothing easy about government security operations. Most chief information security officers struggle with countless manual processes, numerous disconnected tools and a shortage of properly trained talent. However, the increasing number of widely publicized breaches, the growing complexity of regulations pertaining to the protection of personal data and the uptick in ransomware attacks have made agencies' top executives acutely aware of the dangers posed by inadequate security. As a result, those executives have become more willing to approve spending on security operations -- but the increased spending has often not provided the type of returns that agency leaders and financial officers expected. So now CISOs are being asked to provide proof that the money spent -- or that they are asking to be spent -- will lead to greater effectiveness, more efficient operations or better results when the agency is attacked. If calculating the return on security investments could mirror the predicted ROI of a new piece of equipment, the task would be simpler.


Use case categories for IoT and blockchain or distributed ledger convergence, according to Kaleido Insights' report “The Internet of Trusted Things: Blockchain as the Foundation for Autonomous Products & Ecosystem Services” - via Jessica Groopman
In an age where consumers (also part of the ecosystem) want their goods faster than ever, the pressure for digital supply chains to be more interconnected and efficient is high. With interconnected we don’t mean supply chains as such (they are of course interconnected) but the data flows, processes, control mechanisms, myriad stakeholders, actions and interactions (as mentioned in a blockchain smart port case, there are over 30 different parties with on average 200 interactions between them just to get a container from one point to another in shipping). No wonder that global logistics and transportation are among the fastest movers in testing and adopting blockchain technology. It is no different in the intersections of IoT and distributed ledgers from the ecosystem perspective as the Kaleido Insights report depicts it: supply chain is one of five blockchain IoT use cases, or rather industries/activities, where blockchain is seen as the foundation for autonomous products and ecosystem services.



Five Reasons B2B Companies Should Not Ignore Social Media

One of the first things you may hear when you talk to B2B sales managers is that this is a purely rational market -- decisions are pragmatic and solely based on numbers, cash flow and revenue projection. This is just not true. While considerable weight is put into the rational part of the purchase, decision-makers are often filled with emotions when it comes to selecting a provider. The result of these selections could affect their jobs, their careers and their overall stability. Nobody wants to be the person who bought that customer relationship management (CRM) software that was a complete headache and generated huge losses. These emotions need to be addressed with a more humane approach, and social media gives us the ability to have a closer connection to the buyer, not the entity. Examples of this are using videos of client testimonials that feel real and not scripted and introducing your employees to your social audience to start the conversation in the digital environment, thereby creating closer relationships offline.


Understanding Virtual Private Networks [and why VPNs are important to SD-WAN]

Another type of site-to-site VPN is connecting to a carrier-provided MPLS cloud instead of the public internet, offloading establishment of the VPN connections to the provider. The service provider creates virtual connections between sites across its MPLS network. The primary advantages of this type of VPN are network agility and the ability to mesh the network. In a typical site-to-site network, each branch is connected to the data center, and any branch-to-branch traffic flows through that central hub. With meshing, branches connect to each other directly without going through the hub. This direct connectivity may be necessary for video conferencing and other bandwidth-intensive and delay-sensitive applications, and MPLS VPNs are ideally suited for this use case. The negative to MPLS VPNs has always been cost. Private IP services like MPLS are very expensive, particularly for international connections. SD-WANs have been red hot of late with network professionals because they provide the cost benefits of Internet-based VPNs with the performance and agility of MPLS VPNs.
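The hub-and-spoke versus full-mesh difference is easy to quantify: a hub needs n - 1 links for n sites, while a full mesh needs n(n - 1)/2 direct connections, which is why provider-managed meshing is attractive as site counts grow. A quick arithmetic sketch:

```python
def hub_and_spoke_links(n_sites):
    """Each branch connects only to the data-center hub: n - 1 links."""
    return n_sites - 1

def full_mesh_links(n_sites):
    """Every site connects directly to every other: n * (n - 1) / 2 links."""
    return n_sites * (n_sites - 1) // 2

# For 10 sites, meshing needs five times as many tunnels as hub-and-spoke —
# tunnels that an MPLS provider (or an SD-WAN overlay) can manage for you.
print(hub_and_spoke_links(10))  # 9
print(full_mesh_links(10))      # 45
```

At 50 sites the gap widens to 49 versus 1,225 links, which makes manually configured site-to-site tunnels impractical without provider or SD-WAN automation.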


Think of a Number: Why Humans and Machines Are Bad at Being Random

Choosing truly random numbers is a challenge for people just as much as it is for machines. It is also largely outside the control of the lottery organizers and has some potentially dangerous financial ramifications. For example, in the 1990s, the UK lottery paid a guaranteed £10 for picking three correct numbers. If enough people played the same correct numbers and won, would this bankrupt the lottery? That idea almost became reality with an experiment on the 1997 UK television series The Paranormal World of Paul McKenna. In an experiment in so-called psychokinesis — the power of mind over matter — McKenna asked the nation to concentrate on a particular set of six numbers in the hope of making them come up in the Saturday draw. It quickly became apparent that if millions of viewers bet on them, and even if only three of those numbers came up for the minimum £10 prize, the National Lottery would have to make a huge and unprecedented payout. Luckily for the organizers, the numbers didn’t win, but it led to Camelot changing the rules shortly after the McKenna program, so that four, not three, matching numbers were now required to win the minimum prize.
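The organizers' exposure in that scenario is simple arithmetic: in a 6-from-49 draw, matching at least three of a fixed ticket's numbers is not that rare, so a million identical tickets at £10 each is a real liability. A sketch of the calculation (the viewer count is illustrative):

```python
from math import comb

def p_at_least_k_matches(k, picks=6, pool=49):
    """Probability that a fixed 6-number ticket matches at least k of the
    6 numbers drawn in a 6/49 lottery (hypergeometric tail)."""
    total = comb(pool, picks)
    return sum(
        comb(picks, m) * comb(pool - picks, picks - m)
        for m in range(k, picks + 1)
    ) / total

def exposure(tickets, prize=10):
    """Guaranteed payout in pounds if every ticket qualifies for the prize."""
    return tickets * prize

print(round(p_at_least_k_matches(3), 4))  # 0.0186 — roughly 1 in 54 draws
print(exposure(1_000_000))                # £10,000,000 if a million viewers all qualify
```

That roughly 1-in-54 chance per draw explains why Camelot moved the minimum prize to four matches, where the probability drops to about 1 in 1,032.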


22 European countries sign up to blockchain partnership


The document said that by cooperating closely, creating a “European ecosystem for blockchain services and applications”, fragmented approaches can be avoided. “It can enable the development of interoperable frameworks for blockchain in Europe based on standardised solutions and governance models,” the declaration said. “Such cooperation can also strengthen compliance with regulations and regulatory convergence, which is essential to support scalability of such solutions across borders.” Earlier this year, the EC set up the Blockchain Observatory and Forum, which will monitor blockchain developments and ensure EU organisations are engaged in the industry.  ... “Such services will create opportunities to enhance services in both public and private sectors, notably making better use of public sector information while preserving data integrity, and providing better control of data by citizens and organisations interacting with public administrations, reducing fraud, improving recordkeeping, access, transparency and auditability, within and across borders.”


Hackers Found Using A New Code Injection Technique to Evade Detection

While performing in-depth analysis of various malware samples, security researchers at Cyberbit found a new code injection technique, dubbed Early Bird, being used by at least three different sophisticated malware that helped attackers evade detection. As its name suggests, Early Bird is a "simple yet powerful" technique that allows attackers to inject malicious code into a legitimate process before its main thread starts, and thereby avoids detection by Windows hook engines used by most anti-malware products. The Early Bird code injection technique "loads the malicious code in a very early stage of thread initialization, before many security products place their hooks—which allows the malware to perform its malicious actions without being detected," the researchers said. The technique is similar to the AtomBombing code injection technique that does not rely on easy-to-detect API calls, allowing malware to inject code into processes in a manner that no anti-malware tools can detect.


Transform HR with Intelligent Automation

Intelligent Automation
Automation is everywhere. As the “new norm,” intelligent automation plays a key role in our everyday lives. From Lyft providing instant access to a ride and Alexa keeping track of the grocery list, intelligent automation is behind the scenes enhancing services and making our lives easier. With less and less effort required on our part, intelligent automation is quickly becoming the preferred way of doing things. While the world moves towards increased automation, many companies, and HR organizations in particular, are striving to follow. However, many HR organizations still rely on manual and unstructured work processes. In a recent study of more than 1,850 business leaders, HR was named the department “most in need of a reboot,” with only 37 percent of services being automated. This makes it not only difficult for HR teams to do their jobs effectively, but also reflects on the department overall and what employees experience when interacting with HR. The lack of automation in HR organizations is especially challenging given the dramatic rise in workloads in recent years.


Artificial Intelligence - The Next Digital Frontier?

In this independent discussion paper, we examine investment in artificial intelligence (AI), describe how it is being deployed by companies that have started to use these technologies across sectors, and aim to explore its potential to become a major business disrupter. To do this, we looked at AI through several lenses. We analyzed the total investment landscape bringing together both investment of large corporations and funding from venture capital and private equity funds. We also reviewed the portfolio plays of major internet companies, the dynamics in AI ecosystems from Shenzhen to New York, and a wide range of case studies. As part of our primary research, we surveyed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. This report also leverages the resources of McKinsey Analytics, a global practice that helps clients achieve better performance through data. The research was conducted jointly with Digital McKinsey, a global practice that designs and implements digital transformations.


Technology Must Make Digital Banking More Human

There needs to be an extreme sense of urgency within the organization to accept and implement change; otherwise the organization will face the ‘doomsday’ industry scenario that we’ve been warned about. As a result, it is important to find deeply transformational levers – changes that will fundamentally make a collective difference faster than a general employee engagement program or a sheer reorganization exercise. That’s why every Emotional Banking workshop and program corresponds to one of four key levers … Knowledge, Passion, Courage and Language. Because the connection to brands differs from bank to bank, the process of building a successful emotional connection between the organization and the customer must be flexible. Emotional Banking is an evolutionary process that will change over time and will be customized for each organization. I have enlisted a global deployment partner as well as Dr. Julia Furedi to build learnings around what works best and fastest, and to help demonstrate the synergy between the four levers and the technology now available at leading banks.



Quote for the day:



"The most important thing in communication is to hear what isn't being said." -- Peter Drucker


Daily Tech Digest - April 13, 2018

What is hybrid cloud really, and what's the best strategy?

“Hybrid cloud is a cloud computing environment that uses a mix of private cloud and public cloud services with orchestration between the platforms allowing data and applications to be shared between them,” says Ritu Jyoti, research director on IDC's enterprise storage, server and infrastructure software team. An enterprise might be testing an app on public cloud or private cloud and running actual production on a different cloud, Jyoti says. “Or they’re running backup and recovery on public cloud but actual production on private cloud. Or they’re running the actual production application on a hosted private cloud but moving data on public cloud to run some analytics on top of it. There’s a separation of use case, but it’s essentially for the same workload.”  Some 56 percent of organizations surveyed by IDC use private cloud and public cloud for the same workload today, and another 8 percent say they use one type of cloud for production and another type for testing, development, backup or analytics – which is also considered a hybrid-cloud strategy.
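The “separation of use case” Jyoti describes can be sketched as a simple placement policy. This is a minimal illustration only; the cloud names, use cases, and workload are hypothetical, and real orchestration platforms express such policies far more richly.

```python
# Hypothetical placement policy: the same workload is mapped to a
# different cloud depending on the use case, per the hybrid pattern above.
PLACEMENT_POLICY = {
    "production": "private-cloud",
    "backup": "public-cloud",
    "analytics": "public-cloud",
    "testing": "public-cloud",
}

def place(workload: str, use_case: str) -> str:
    """Return the platform a given use case of a workload runs on."""
    platform = PLACEMENT_POLICY.get(use_case)
    if platform is None:
        raise ValueError(f"no placement rule for use case: {use_case}")
    return f"{workload} ({use_case}) -> {platform}"

print(place("billing-app", "production"))
print(place("billing-app", "analytics"))
```

The key point the sketch captures is that the *workload* is constant while the *use case* drives placement, which is what distinguishes hybrid cloud from simply using two clouds for two different applications.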



How to mess up your agile transformation in seven easy (mis)steps

Agile principles have been one of the key drivers of Silicon Valley’s ability to innovate, learn, and adapt rapidly. Agile started as a set of principles for software development to write and release code iteratively without waiting for months (or years) to release functionality. The term “agile” has now expanded to many facets of solution development with the same underlying principles—develop iteratively, release frequently, focus on the customer, and collaborate through a cross-functional team—always prioritizing test-and-learn methods over detailed planning. Beyond solution development, we are designing and implementing enterprise-wide operating models based on these principles. While many traditional heavyweights have embarked on agile transformations, most have faced real challenges in achieving their desired objectives. Based on our experience across numerous transformations, we see the following as common missteps on an agile journey.


Rely on the DevOps methodology to avoid disaster


First, organizations must help development teams understand how application changes affect both operations and customers. Developers should begin to see they have a vested interest in the success of the entire process. DevOps is not about ops controlling dev, but about both groups taking a stake in the entire process. This won't be easy, but solid communication from management, and incentives based on overall success rather than development-specific goals tied to speed, help all teams focus on the needs of the business and customer. All this sounds ideal on paper, but people don't always buy in, or they quickly revert to old ways. It is up to management to avoid simply handing down a DevOps mandate and then walking away. The DevOps process must be reinforced at every turn as a positive method for both the company and the people on the ground making it happen. Everyone needs to take ownership. Any DevOps journey will experience peaks and valleys, but if you stick with it through the failures, the results can be impressive.


How are Robo-Advisors Changing the Finance Industry?


They aim to make the investment process more straightforward and affordable. Indeed, one of their signature features is that they plug a gap for those who are unable to meet the minimum investment balance threshold for traditional advisors, which tends to be upwards of $200,000 if not more. In contrast, robo-advisors can apply a professional service to a portfolio valued at as little as $5,000 — and often even less. And the associated management fees are also substantially lower. Firms generally charge under 1.0% of assets per year, while some, like WiseBanyan, don’t even charge an expense ratio. The UK Treasury and Financial Conduct Authority call this gap the “advice gap”. In the words of Treasury Committee member Mark Garnier, “As we move into an increasingly digital age, it is inevitable that the traditional financial advisor will be available in a robotic form. This is not a bad thing as it will make standardised advice available to everyone cheaply.” This explains why robos are proving especially popular among the younger, less affluent and less experienced investment crowd at present.
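A quick back-of-the-envelope calculation shows what the fee gap means in dollar terms. The 0.25% robo-advisor rate and 1.0% traditional rate below are illustrative assumptions, not quotes from any specific firm.

```python
def annual_fee(balance: float, fee_rate: float) -> float:
    """Annual advisory fee on a portfolio balance."""
    return balance * fee_rate

balance = 5_000  # a minimum many robo-advisors accept
robo = annual_fee(balance, 0.0025)        # assumed 0.25% robo-advisor fee
traditional = annual_fee(balance, 0.01)   # assumed 1.0% traditional fee

print(f"robo-advisor: ${robo:.2f}/yr, traditional: ${traditional:.2f}/yr")
```

Even on a small balance the difference compounds over decades, which is part of why the lower-fee model appeals to younger investors.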


Take a Business-Centric Approach to IT


While it is true that we do have certain procedures for data and systems we know are critical to the company’s business, I think we tend not to review those as often as we probably should. So, it stands to reason that if you want to view the business’ digital assets in a way that reflects their various levels of criticality, you should conduct some type of review of these assets to understand the value each holds for the company. In short, this is a Business Impact Analysis. What does it mean to the company when X service or application is unavailable? What does it cost when it is unavailable for an hour, two hours, three hours or more? If those questions cannot be answered, then it becomes very difficult to truly prioritize during a recovery and, more importantly, to set the frequency of the protection schedule based on those answers. When you embark on this journey, you may quickly find that some of the questions you are about to ask of the business unit manager(s) may be more difficult for them to answer than for you to ask. I call this the “Storm before the Calm,” which was also the title of the consulting document I used to train my clients on how to conduct a BIA.
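The prioritization step of a BIA can be sketched as a toy calculation. The services and hourly downtime costs below are hypothetical placeholders; discovering the real numbers is exactly the hard interviewing work the author describes.

```python
# Hypothetical output of a BIA: estimated cost per hour of downtime.
services = {
    "order-entry": 12_000,
    "email": 1_500,
    "reporting": 400,
}

def recovery_priority(costs: dict) -> list:
    """Rank services by hourly downtime cost, most expensive first."""
    return sorted(costs, key=costs.get, reverse=True)

def outage_cost(costs: dict, service: str, hours: float) -> float:
    """Answer the 'what does N hours of downtime cost?' question."""
    return costs[service] * hours

print(recovery_priority(services))
print(outage_cost(services, "order-entry", 3))
```

Once each service has a defensible cost-per-hour figure, both the recovery order and the backup frequency fall out of the same table.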


Everything you need to know about Apple's GDPR privacy upgrade

Effectively this means Apple’s products are private by design, which should open up interesting opportunities for the company in future. These aren’t the only privacy enhancements we can look forward to from Apple as it prepares for GDPR rules to become mandatory in May. The company has said it plans to update its Apple ID management page with a way to let users download a copy of all the data they have stored with the company. The company will allow users to download data across individual apps, which means data concerning your music playback choices or which news stories you’ve been reading will be as easy to download as information about the contents of your Contacts or Calendar apps. That’s the equivalent of the data Facebook allows its users to download about themselves, though Apple’s pre-existing commitment to privacy means we don’t expect too many unpleasant surprises – though I will be paying particular attention to Location data logs when I check my records. Apple also intends to make it much easier for its customers to control their data.


How to Create a Meeting-Smart Work Culture

It's common for companies to have standing meetings for status updates and team bonding, and it's even more common for people to put off inconvenient discussions by suggesting, "Let's schedule a meeting to talk about it next week" or "Why don't we call a meeting to figure out next steps?" While their intentions are noble, their results wreak silent havoc. That $37 billion that's lost each year to U.S. businesses could be spent on furthering their visions. Instead, that money is sunk in meetings that drain team members' energy, lower morale, lead to bad ideas, and waste time -- all of which hurts a business. And the pain is more severe when you consider that 50 percent of high-level managers' time is spent in meetings. That means that your most valuable -- and expensive -- team members are wasting the most time. ... Training your team to treat meetings as time- and money-intensive endeavors will result in better outcomes for everyone. The first step to declaring war is to assess where your business currently stands on meetings. How many of each type of meeting are you having?
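Training a team to treat meetings as money-intensive endeavors is easier once you can put a number on a single recurring meeting. This sketch uses an assumed $75/hour fully loaded rate and eight attendees purely for illustration.

```python
def meeting_cost(attendees: int, avg_hourly_rate: float, hours: float) -> float:
    """Fully loaded salary cost of one meeting."""
    return attendees * avg_hourly_rate * hours

# A weekly one-hour status meeting: eight people at a $75/hr loaded rate.
weekly = meeting_cost(8, 75, 1.0)
yearly = weekly * 50  # roughly 50 working weeks

print(f"per meeting: ${weekly:,.0f}, per year: ${yearly:,.0f}")
```

Multiplied across every standing meeting on the calendar, small per-meeting numbers like these are how losses reach the scale cited above.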


New cloud threats as attackers embrace the power of cloud


Data shows a rise in the number of attackers consuming public cloud services to host command-and-control servers for IoT botnets and ransomware. In January, the Spamhaus Project, a nonprofit based in Geneva, released its 2017 Botnet Threat Report. Researchers at Spamhaus Malware Labs identified more than 9,500 botnet command-and-control servers on 1,122 different networks. Botnet controllers, according to Spamhaus' block listings, increased 32% in 2017, and that data does not include controllers hosted on the dark web, where servers can't be identified. "What stands out in 2017 is the dramatic increase of botnet controllers hosted at cloud providers," the researchers stated. Large botnet operators have become cloud threats in their own right, deploying botnet controllers in public clouds such as Amazon Web Services and Google Cloud Platform (Compute Engine) using fraudulent signups. "While some of the cloud providers managed to deal with the increase of fraudulent signups, others are obviously still struggling with the problem," researchers said.


One in five serverless apps has a critical security vulnerability

According to the audit of more than 1,000 apps by Israeli security firm PureSec, most vulnerabilities and weaknesses were caused by copying and pasting insecure sample code into real-world projects, poor development practices, and a lack of serverless education. This is the kind of bad behavior you really don’t expect to see from professional developers. Additionally, the company found 6 percent of the projects had application secrets, such as API keys or credentials, posted in their publicly accessible code repositories. PureSec looked at apps written in a variety of popular languages — Java, Python, Go, and NodeJS — and found vulnerability rates for all of them within a few percentage points of one another, around 20 percent each. The exception was Microsoft’s .Net, where the group found 42.9 percent of serverless apps had some kind of vulnerability. Perhaps not surprisingly, this news comes as PureSec announced a product to secure serverless applications. It has launched a beta version of its PureSec SSRE platform for AWS Lambda, which can defend against application-layer attacks.
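Secrets leaking into public repositories, as in the 6 percent finding above, can be caught with even a crude pattern scan before code is pushed. This is a minimal sketch, not a substitute for real scanners like truffleHog or git-secrets; the patterns are simplified, and the AWS key shown is Amazon's documented example key, not a live credential.

```python
import re

# Simplified patterns for two common credential formats; real scanners
# use many more rules plus entropy checks to cut false negatives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*=\s*['\"][0-9a-zA-Z]{16,}['\"]", re.I
    ),
}

def scan(text: str) -> list:
    """Return the names of secret patterns found in a blob of source code."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "9f8e7d6c5b4a39281706"\n'
print(scan(sample))
```

Hooked into a pre-commit check, even a scanner this simple blocks the most obvious copy-paste leaks before they reach a public repository.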


AI is a great example of a technology that, when applied to cybersecurity, can smartly advance IT efficiency and security, particularly for enterprises constrained by time and resources. There are clear advantages for companies that choose to utilize AI in their cybersecurity efforts, as I discussed in a previous Forbes article. But what we have not fully considered is how this type of technology innovation may introduce new areas of exposure that hackers can use to their advantage. The more we innovate in cybersecurity, the more fuel we may be adding to the proverbial fire for cybercriminals. Consider this: while it may be incredibly easy for a hacker to use social engineering to trick someone into clicking on a link or giving up their login credentials, it is just as easy, if not easier, for a hacker to use AI to their advantage. A recent study found that when deploying a phishing scheme against humans, it was not the human hacker but the artificial one that achieved the higher click-through rate, converting more of those malicious click-throughs into successful phishing attacks.



Quote for the day:


"A happy life consists not in the absence, but in the mastery of hardships." -- Helen Keller


Daily Tech Digest - April 12, 2018

The Role Of Big Data And Mobile Apps In Healthcare


The use of sensors to monitor everything – from whether a patient took the right dose of medication at the right time to whether their insulin levels are in check – is one of the big growth areas in the Internet of Things (IoT). Combining this with the global adoption of mobile devices, especially smartphones and wearables, means our health can be monitored on a continuous and proactive basis by AI. If a problem is detected, a healthcare professional can be alerted to take appropriate action. While the benefits of such constant surveillance of one’s health, especially in cases of patients with chronic or life-threatening illnesses, are obvious, they also bring added risk. Where there is a sensor, there is vulnerability and the risk of cyberattacks. Recently, Johnson & Johnson warned diabetes patients of the possibility of hackers affecting their insulin dosages. Although the probability is low (and possibly would have been lower without the publicity), this does raise fears of attacks on prominent people who need to manage their insulin.
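The monitor-and-alert loop described above reduces to a few lines: stream readings, flag anything outside a target range, escalate to a clinician. The glucose thresholds and readings here are illustrative placeholders, not medical guidance.

```python
def check_glucose(readings_mg_dl, low=70, high=180):
    """Flag readings outside a target range so a clinician can be alerted.

    readings_mg_dl: list of (timestamp, value) pairs from a sensor.
    low/high: illustrative mg/dL thresholds, not clinical advice.
    """
    alerts = []
    for timestamp, value in readings_mg_dl:
        if value < low or value > high:
            alerts.append((timestamp, value))
    return alerts

readings = [("08:00", 95), ("12:00", 210), ("16:00", 62)]
print(check_glucose(readings))
```

In a real deployment the alerting path itself, not just the sensor, is part of the attack surface the excerpt warns about: spoofed readings could suppress or trigger interventions.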


Digital Is About Speed — But It Takes a Long Time

The ability of digital technologies to accelerate business is giving rise to new value propositions — value propositions that use information to eliminate hassles, enhance awareness, and create solutions. Companies succeed in the digital economy by converting meaningful digital value propositions into revenue-generating digital offerings. We define digital offerings as information-enriched customer solutions wrapped in engaging customer experiences. The great irony is that, while new digital offerings are accelerating the pace of business, they only gradually come to fruition. That is because established companies are usually designed for efficient delivery of their existing products. They are not designed as software companies that rapidly build — and just as rapidly change — their customer offerings. Digital demands entirely new approaches to imagining, designing, delivering, and servicing those value propositions. Consequently, organizational transformation cannot be speedy. ... it takes time — and longer than you might think — to identify and then deliver a new digitally inspired value proposition. The experiences of Royal Philips highlight why.


Global Digital Operations 2018 Survey

Distinct from Industry 3.0, which involved the automation of single machines and processes, Industry 4.0 encompasses end-to-end digitization and data integration of the value chain: offering digital products and services, operating connected physical and virtual assets, transforming and integrating all operations and internal activities, building partnerships, and optimizing customer-facing activities. PwC’s Strategy& interviewed 1,155 manufacturing executives in 26 countries to develop an index that ranks companies by digital operations maturity into four levels: Digital Novices, Digital Followers, Digital Innovators, and Digital Champions. Based on the study data, we were able to create a sweeping portrait of Digital Champions — companies that have taken digitization to the highest degree — and assess what it takes to be a Digital Champion through the lens of the four essential ecosystems they must master and orchestrate.


Private by design: Why Apple’s iOS meets the needs of healthcare IT

Apple already has industry-leading security and privacy built in, and its proven commitment to software patches means roughly 80 percent of all actively used Apple devices are running the most recent version of the OS. Apple’s recently introduced Health Records app for iPhone is clear evidence of Apple’s advantage. It makes patient data portable in such a way as to put the patient in complete control of that data. “I think the good thing about the Apple solution is that the data only resides on the end-user’s device,” Mike Restuccia, CIO at Penn Medicine, told Computerworld. “So, we don’t have access to that. Apple doesn’t have access to the data. The beauty of the solution is it is patient managed, patient controlled and patient centered.” That’s fine, but with cloud-based data analytics expected to become a big driver for future mobile healthcare innovation, the need for privacy will become even harder to meet. Such solutions will require data to leave the device in a way that provides sufficient information for real-time insights and positive health interventions while retaining total patient privacy.


Change is Good When It’s Free, Easy, and Has a Happy Endpoint

When a company announces a new version of something I rely upon in my personal life, like Apple announcing a new iPhone, I cringe. Of course, the thought of the latest and best is exciting, but the thought of upgrading – the cost, the hassle – usually makes me wait (longer than my teenage son would like). The same can be said for our customers who are happy using McAfee virus protection to secure their environment. As a McAfee senior product marketing manager focused on endpoint security and endpoint detection and response solutions for enterprises, I can tell you that today that’s not enough. Cyberthreats continue to evolve, no matter how much we resist change. We must move to a modern endpoint defense solution that can detect zero-day threats, and stop them in their tracks. Now, wait a minute before you go running for aspirin because you are thinking, “This is going to cost a lot of money! This is going to be complex and hard to manage!” We thought about that. Upgrading may be easier than you think.


Using data to cross the finish line

Two such areas are geospatial and translytical – both emerging disciplines that tap large stores of previously unused data. Geospatial data analysis brings together GIS, the system of record for maps, and ERP tools to overlay data onto maps for increased visibility and insights on a geographic basis – a particularly useful tool for applications like smart cities looking to improve traffic patterns, public transportation routes, energy grids and emergency response. Translytical data is another emerging data source that has seen tremendous growth of late, though industry leaders are still defining the term. Gartner defines translytical database architectures as hybrid transactional/analytical processing (HTAP), meaning they make operational data available for analytics. According to a recent Forrester report, translytical is growing because “more enterprise architecture pros see translytical as critical for their enterprise strategy.”
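The translytical (HTAP) idea, transactional writes and analytical reads against one live store, reduces to a toy example. SQLite here stands in for a real HTAP database; the table and data are hypothetical.

```python
import sqlite3

# One in-memory store handles both the operational inserts and the
# analytical aggregate -- the essence of a translytical workload.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rides (city TEXT, fare REAL)")

# Transactional side: individual operational writes, committed atomically.
with db:
    db.executemany(
        "INSERT INTO rides VALUES (?, ?)",
        [("austin", 12.5), ("austin", 8.0), ("boston", 20.0)],
    )

# Analytical side: an aggregate over the same live operational data,
# with no ETL step copying it into a separate warehouse first.
rows = db.execute(
    "SELECT city, COUNT(*), SUM(fare) FROM rides GROUP BY city ORDER BY city"
).fetchall()
print(rows)
```

The point of the pattern is the absence of the extract-transform-load hop: analytics run against the operational rows the moment they are committed.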


Examine CORD architecture's benefits and challenges


CORD offers a new business model to achieve the benefits of NFV. It provides an architecture for open hardware and open software that will reduce overall equipment costs, thus lowering Capex. Network operators can upgrade their network technology without hardware dependency. A vibrant ecosystem of open source software will accelerate innovation and potentially allow service providers to deliver new services to customers more rapidly. A number of suppliers offer CORD-specific products and development efforts, including Radisys, Calix and Adtran. CORD is early in its adoption cycle, with limited production deployment at this time. The CORD architecture has a number of variants -- e.g., R-CORD and M-CORD -- and may not coalesce into a standard with enough critical mass to affect the market. Like NFV, CORD needs to attract a broader ecosystem of NFV hardware, software and services suppliers.


How mobile devices in the travel industry can improve customer service with AR

With competition increasing from Airbnb and other vacation rentals, hotels have been focused on winning over customers by creating immersive, exciting experiences. In that way, augmented reality could be a revolutionary technology. Developers have already identified multiple use cases for AR in hotels. Individual rooms can be outfitted with virtual markers that pull up information about housekeeping, room service and other hotel amenities. Lobbies can overlay maps and directions to navigate the hotel and find luggage checks, pools and other attractions. Hotels can also use AR as a supplement to concierge services, such as providing virtual directions for taking public transportation across the city or finding interesting attractions nearby. Because hotels offer a physical space that travelers spend so much time in, each property represents a gold mine of AR markers that can increase engagement and customer satisfaction. Most travelers know what it’s like to look at a historic building and not entirely know what they’re looking at or why it’s important.


Agile Development & Remote Teams - 6 Powerful Productivity Hacks You Should Know


The Agile environment is built on the idea of making clustered or in-house teams more productive. Agile culture runs on the principles of team collaboration, continuous learning and planning, and seamless interaction. As businesses go global and expand into different geographies, teams become more scattered, with talent from less competitive markets working on projects around the clock. Collaboration between remote teams using Agile, one of the new-age development models, has proven very effective. However, Agile development with remote teams sometimes creates friction and runs into challenges such as: building rapport with team members; coordinating across time zones; scheduling meetings when both teams are online together only for a short duration; and collaborating across different development cultures. To ease these conflicts, organizations need a hybrid Agile development approach specifically targeted at streamlining and powering up remote software development.


11 signs you’re writing great code

Even if you have a modern debugger that you can attach to your runtime and step through your code, the world just does not work that way. Your code may run somewhere else: it may be serverless, multithreaded, or distributed, or it may run on a cloud somewhere. In those environments, it may not behave the same as on your computer. So you’re going to need a log, and that means you need a logging framework. You need to write the code and set up the logging in a way that makes the log readable, or at least digestible with some sort of log reader. Make this part of your software from the start. If you fail to do so, you end up doing production deployments just to ship logging code so you can debug a production problem. In other words, your hair is on fire and you are walking into an oil refinery. At long last, write unit tests. This is a red line. Leave any company that traps you in the old “business case for writing unit tests” discussion—chaos and hell will follow this discussion.
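Setting up a logging framework at program start can look like the sketch below, using Python's standard logging module; the logger name, function, and messages are illustrative.

```python
import logging

# Configure timestamped, leveled logging once, at program start, so the
# output is digestible by a log reader in any environment the code lands in.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("payments")

def charge(order_id: str, amount: float) -> bool:
    """A hypothetical operation that logs what it does and why it fails."""
    log.info("charging order=%s amount=%.2f", order_id, amount)
    if amount <= 0:
        log.error("rejected order=%s: non-positive amount", order_id)
        return False
    return True

charge("A-1001", 25.00)
charge("A-1002", -5.00)
```

Using `%`-style lazy formatting (rather than f-strings) means the message is only rendered when the level is actually emitted, and structured `key=value` fields keep the log machine-parseable.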



Quote for the day:


"Success is liking yourself, liking what you do, and liking how you do it. -- Maya Angelou


Daily Tech Digest - April 11, 2018

How GDPR Drives Real-Time Analytics

The regulations apply to organisations that are trading within the EU. However, this potentially includes organisations from every part of the world. The regulations would keep European organisations from working with companies and states that do not meet the requirements of GDPR. The regulation aims to protect the personal data of natural persons, whatever their nationality or place of residence, and has the potential to apply to citizens and businesses from the U.S., Asia, and other parts of the world. EU organisations are bound by the regulation to protect the personal data of anyone, anywhere in the world, not just EU citizens. Data collectors from outside the EU are also bound to protect the personal data of European citizens as long as it is collected within European borders. The scope of the term personal data has been expanded in the new legislation. It now encompasses any information relating to an identified or identifiable natural person, such as their name, location data, identification number, or employment details. Personal data also covers the physical, genetic, mental, physiological, economic, cultural, or social identity of that person.



IBM tweaks its z14 mainframe to make it a better physical fit for the data center

There are other benefits too: The ZR1 specs give it 16U of space free, so storage, networking or monitoring systems can go in the same rack rather than an adjacent cabinet. And it uses standard air cooling and single-phase power, where the original z14 required a three-phase power supply, he said. Alongside the introduction of the ZR1, IBM is also strengthening the platform's logical partitioning capabilities with Secure Service Container technology. Pop an app in a Docker container, and you can lock it down so that the only way of accessing its data once the workload is running is through defined APIs, Jollans said. "The reason for doing that is that one of the major threats to enterprises is insider attack," he said. "You've got encryption, protection against malware, isolation from other partitions and so on, so it provides a very tight, secure environment for running workloads." So far, IBM has been running its cloud blockchain workload in that context but is now offering it for use with generic applications.


Slack’s Enterprise Grid gets security and compliance enhancements

“This is not like a 500-person company where you can easily send the 10 or so people that start every month to a Slack on-boarding class,” Frank said. “You are talking a 50,000 or 100,000-person company; that is a lot more complicated.” Raul Castañon-Martinez, senior analyst at 451 Research, said the new features should help enhance Slack’s enterprise-friendliness as deployments of the tool grow in scale. “Slack’s success is closely tied to organic, bottom-up adoption; this means that employees find value in it,” Castañon-Martinez said. “The new features show that Slack is also paying close attention to the other part of the equation: enterprise requirements for management and security. The new product features [reflect] Slack’s commitment to continue building an enterprise-grade platform,” he said. Slack has also made changes to its compliance processes and features. It is now possible to create a Custom Terms of Service that all employees must sign before logging in to Slack. Custom Terms of Service can be applied to guest accounts too, which could differ from those provided for staff and require an NDA, for example.


3 key steps for running chaos engineering experiments

Chaos engineering is the practice of running thoughtful, planned experiments that teach us how our systems behave in the face of failure. Given the trends around dynamic cloud environments and the rise of microservices, the web continues to grow increasingly complex alongside our dependency on these systems. Making sure failures are mitigated and proactively deterred is more important now than ever. Even brief issues can hurt customer experience and impact a company’s bottom line. The cost of downtime is becoming a major KPI for engineering teams, and when there’s a major outage the cost can be devastating. In 2017, 98 percent of organizations surveyed by ITIC said a single hour of downtime would cost their business over $100,000. One major outage could cost a single company millions of dollars. The CEO of British Airways recently revealed that a technological failure that stranded tens of thousands of British Airways passengers in May 2017 cost the company 80 million pounds ($102.19 million USD).
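The usual shape of a chaos experiment is: define a steady-state hypothesis, inject a fault, then verify the steady state held. The sketch below compresses that loop; the "service" and its cache fallback are hypothetical, and the fallback is deliberately written to always work so the hypothesis holds.

```python
import random

def service(dependency_up: bool) -> str:
    """A toy service that falls back to a cache when its dependency fails."""
    if dependency_up:
        return "200 OK"
    # Fallback path we *hypothesize* keeps the user experience intact.
    return "200 OK (served from cache)"

def run_experiment(requests: int, failure_rate: float) -> float:
    """Inject dependency failures and measure how often users still succeed."""
    random.seed(42)  # deterministic, purely for illustration
    ok = sum(
        service(random.random() > failure_rate).startswith("200")
        for _ in range(requests)
    )
    return ok / requests

# 1. Steady-state hypothesis: success rate stays at 100%.
# 2. Inject the fault: ~30% of requests lose the dependency.
# 3. Verify: did the fallback hold the steady state?
success = run_experiment(1000, 0.30)
print(f"success rate under fault injection: {success:.0%}")
assert success == 1.0, "steady state violated -- fallback is not working"
```

In a real system the interesting runs are the ones where the final assertion fails: that is the experiment teaching you, cheaply and on your schedule, what an outage would have taught you expensively.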


Facebook’s data problems have an upside for banks

Banks are quick to emphasize how important guarding someone’s data is, and they have long bickered over how to share data outside of the bank’s four walls. In Jamie Dimon’s annual letter to JPMorgan Chase shareholders, the CEO of the bank wrote, “We have consistently warned our customers about privacy issues, which will become increasingly critical for all industries as consumers realize the severity of the problem.” Given Facebook’s woes, a bank brouhaha has already broken out over whether the social media platform’s data breach should slow down the open banking movement. But regardless of the implications for nonbank apps, some see the heightened sensitivity around data sharing as reasons why banks ought to step up the ways they slice and dice consumer information. Certainly, Citi is betting on an opportunity in financial wellness. Its app, which will be available to iPhone users in the coming weeks, will initially rely on word-of-mouth marketing for a service designed to help anybody understand spending patterns, spot recurring bills and open an account in-app if desired.


Bank of America, Harvard form group to promote responsible AI

Bessant pointed out that financial institutions have a big impact on consumers’ lives and therefore a great duty to be responsible in their use of AI when it comes to extending credit, recommending investments and protecting customer funds, data and digital systems. “Because we affect money, whether it’s the movement of money or the investment and return of money or it’s how capital markets work for companies and jobs, I believe we have a monumental responsibility to get it right,” she said. All of these services “have to use models and algorithms and will be at their best when we can use predictive technologies, but we have to make sure that as we capture that growth and do what’s right and great for our customers and clients, that we’re also recognizing the potential pitfalls.” Like any other program, an artificial intelligence program can be subject to “garbage in, garbage out,” drawing false conclusions from flawed or incomplete data.


Oblivious DNS could protect your internet traffic against snooping

At its most basic level, DNS matches website names with their IP addresses, making it a fundamental part of the structure of the internet. Any change to DNS is likely to be met with resistance, the researchers say, making changing it to protect users difficult. It's also simple for a third party, like law enforcement or cybercriminals, to snoop on the personally identifying information that is transmitted to DNS servers. That information includes your IP address, the geographical subnet you are on (and therefore your general location), your MAC address, and the name of the website you want to visit. Those personal details are transmitted in plain text, making them easy to intercept. Internet users also need to have faith in the security of their DNS provider: all the information transmitted can be stored, creating a total profile of the internet use coming from your IP address, or even your particular computer.
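The plaintext exposure the researchers describe is easy to demonstrate: a standard DNS query (RFC 1035) carries the requested hostname as unencrypted, length-prefixed labels, so anyone who captures the packet on the wire can read the name directly. A minimal sketch in Python (the transaction ID and hostname here are arbitrary examples):

```python
import struct

def build_dns_query(hostname, txid=0x1234):
    """Build a minimal plaintext DNS query packet (RFC 1035): A record, recursion desired."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # 12-byte header, QDCOUNT=1
    qname = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def extract_queried_name(packet):
    """Read the hostname straight out of the raw bytes -- no decryption needed."""
    labels, pos = [], 12  # the question section starts right after the 12-byte header
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1 : pos + 1 + length].decode())
        pos += 1 + length
    return ".".join(labels)

query = build_dns_query("example.com")
print(extract_queried_name(query))  # an on-path observer recovers: example.com
```

Oblivious DNS aims to break exactly this linkage, so that no single party sees both who is asking and what name they asked for.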


Splunk debuts IIoT product for in-depth analytics

“Industry 4.0’s kind of broad – it encompasses customers from transportation, oil and gas, energy and utilities companies,” said Haji. “These companies are using Splunk Enterprise today … we see them using Splunk Enterprise to gain insight into their industrial operations.” Splunk is known as a provider of log analysis and infrastructure management tools centered primarily around an expertise with big data analytics. Splunk has enlisted an array of partners to help the company navigate the murky waters of the industrial world, according to Haji. “We’ve invested very heavily in building out a very targeted set of system integrators,” she said. “These are the guys that have deep domain expertise in industrial IoT, and they also have a deep relationship with their customers.” Splunk is the first major player in the log analysis sector to make a major push into IoT, but it’ll face a brand-new slate of competitors, beyond the Sumo Logics and Logglys of the world.


How to Get Yourself Out of Technical Debt
On a long enough timeline, technical debt creates a lot of misery in the office. Team members tend toward finger-pointing and infighting, and a sense of embarrassment pervades. Nobody likes explaining over and over again to stakeholders that seemingly simple changes are actually really hard. So you might just take a breath one day and ask yourself if life isn't too short to keep coming in every day and gingerly massaging some 20-year-old, battleship-gray WinForms app into shape. Maybe it's time to move on to greener and more satisfying pastures. Now, bear in mind that I'm not advocating that you quit your job every time the team makes a technical decision you don't agree with. I'm talking about a situation that feels like a true dead end and where you can feel your market worth slipping day over day. It's not a decision to make lightly, but you should understand that crushing technical debt isn't something you have to tolerate indefinitely, either.


Polyglot Persistence Powering Microservices

Netflix has to look at the user authorization and licensing for the content. Netflix has a network of Open Connect Appliances (OCAs) spread all over the world. These OCAs are where Netflix stores the video bits, and the sole purpose of these appliances is to deliver the bits as quickly and efficiently as possible to your devices, while a control plane running on Amazon handles the microservices and the data-persistence store. This service is the one responsible for generating the URL, and from there, we can stream the movie to you. The very first requirement for this service is to be highly available. We don't want any user experience to be compromised when you are trying to watch a movie, say, so high availability was priority number one. Next, we want tiny read and write latencies, less than one millisecond, because this service lies in the middle of the streaming path, and we want the movie to play the moment you click play. We also want high throughput per node. Although the files are pre-positioned in all of these caches, they can change based on cache health or when Netflix introduces new movies; there are multiple dimensions along which these movie files can change.
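At its core, the URL-generation service described above amounts to a very fast lookup from a title and a viewer's location to the OCAs currently holding that title's files. The sketch below is purely illustrative: the hostnames, data model, and selection logic are invented for this example, and Netflix's real service sits on a replicated, low-latency data store rather than an in-process dictionary.

```python
# Hypothetical placement map: (movie_id, region) -> OCAs holding that title.
# In production this would live in a replicated key-value store with
# sub-millisecond reads, since the lookup sits on the streaming path.
OCA_PLACEMENT = {
    ("movie-42", "eu-west"): ["oca-ams-1.example.net", "oca-lhr-3.example.net"],
    ("movie-42", "us-east"): ["oca-iad-2.example.net"],
}

def generate_stream_url(movie_id: str, region: str) -> str:
    """Pick an OCA that has the title cached and build a playback URL.

    The read must be a single key fetch to keep latency tiny; a real
    deployment would also weigh OCA health and load before choosing,
    since placement shifts as caches change and new titles arrive.
    """
    ocas = OCA_PLACEMENT.get((movie_id, region))
    if not ocas:
        raise LookupError(f"no OCA holds {movie_id} in {region}")
    return f"https://{ocas[0]}/stream/{movie_id}"

print(generate_stream_url("movie-42", "eu-west"))
```

The design pressure the excerpt describes, high availability first and sub-millisecond reads second, is what pushes this kind of service toward heavily replicated storage rather than a single authoritative database.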



Quote for the day:


"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy