Daily Tech Digest - April 11, 2018

How GDPR Drives Real-Time Analytics

The regulation applies to organisations trading within the EU, but that potentially takes in organisations from every part of the world. It would keep European organisations from working with companies and states that do not meet GDPR's requirements, and it aims to protect the personal data of natural persons whatever their nationality or place of residence, so it can reach citizens and businesses in the U.S., Asia, and elsewhere. EU organisations are bound to protect the personal data of anyone, anywhere in the world, not just EU citizens, and data collectors outside the EU must likewise protect the personal data of European citizens as long as it is collected within European borders. The scope of the term personal data has also been expanded in the new legislation: it now covers any information relating to an identified or identifiable natural person, such as their name, location data, identification number, or employment details, as well as factors specific to that person's physical, genetic, mental, physiological, economic, cultural, or social identity.



IBM tweaks its z14 mainframe to make it a better physical fit for the data center

There are other benefits too: the ZR1's specs leave 16U of space free, so storage, networking, or monitoring systems can go in the same rack rather than an adjacent cabinet. And it uses standard air cooling and single-phase power, where the original z14 required a three-phase power supply, he said. Alongside the introduction of the ZR1, IBM is also strengthening the platform's logical partitioning capabilities with Secure Service Container technology. Pop an app in a Docker container and you can lock it down so that the only way of accessing its data once the workload is running is through defined APIs, Jollans said. "The reason for doing that is one of the major threats to enterprises is insider attack," he said. "You've got encryption, protection against malware, isolation from other partitions and so on, so it provides a very tight, secure environment for running workloads." So far, IBM has been running its cloud blockchain workload in that context but is now offering it for use with generic applications.
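To make the "defined APIs only" idea concrete, here is a minimal, generic Python sketch of a workload whose data can be read only through a single HTTP endpoint. It is an illustration of the pattern, not IBM's Secure Service Container interface; the ledger data and endpoint path are made up.

```python
# Minimal sketch of the "access only through defined APIs" idea described above.
# This is NOT IBM's Secure Service Container API -- just a generic illustration:
# the workload exposes exactly one defined endpoint, and that endpoint is the
# only supported path to the data while the container is running.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a locked-down container this would live on an encrypted volume;
# the record layout here is an illustrative assumption.
LEDGER = {"tx-001": {"amount": 125.40, "currency": "EUR"}}

class DefinedAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # /ledger/<tx-id> is the single defined read API for this workload.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "ledger" and parts[1] in LEDGER:
            body = json.dumps(LEDGER[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "only the defined ledger API is exposed")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DefinedAPI).serve_forever()
```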


Slack’s Enterprise Grid gets security and compliance enhancements

“This is not like a 500-person company where you can easily send the 10 or so people that start every month to a Slack on-boarding class,” Frank said. “You are talking a 50,000- or 100,000-person company; that is a lot more complicated.” Raul Castañon-Martinez, senior analyst at 451 Research, said the new features should help enhance Slack’s enterprise-friendliness as deployments of the tool grow in scale. “Slack’s success is closely tied to organic, bottom-up adoption; this means that employees find value in it,” Castañon-Martinez said. “The new features show that Slack is also paying close attention to the other part of the equation: enterprise requirements for management and security.” “The new product features Slack’s commitment to continue building an enterprise-grade platform,” he said. Slack has also made changes to its compliance processes and features. It is now possible to create a Custom Terms of Service that all employees must sign before logging in to Slack. Custom Terms of Service can also be applied to guest accounts; these could differ from the terms shown to staff and require an NDA, for example.


3 key steps for running chaos engineering experiments

Chaos engineering is the practice of running thoughtful, planned experiments that teach us how our systems behave in the face of failure. Given the trend toward dynamic cloud environments and the rise of microservices, the web continues to grow more complex, and so does our dependency on these systems. Making sure failures are mitigated and proactively deterred is more important now than ever. Even brief issues can hurt customer experience and a company’s bottom line. The cost of downtime is becoming a major KPI for engineering teams, and when there is a major outage the cost can be devastating. In 2017, 98 percent of organizations surveyed by ITIC said a single hour of downtime would cost their business over $100,000, and one major outage can cost a single company millions of dollars. The CEO of British Airways recently revealed that the technology failure that stranded tens of thousands of British Airways passengers in May 2017 cost the company 80 million pounds ($102.19 million USD).
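The experiments themselves can be small. Below is a minimal Python sketch of the three steps; the service call, latency budget, and failure injection are stand-ins for illustration rather than any particular chaos engineering tool.

```python
# A minimal chaos-experiment sketch: (1) define the steady state,
# (2) inject a controlled failure, (3) verify the hypothesis.
# The latency budget and injected delay are illustrative assumptions.
import random
import time

LATENCY_BUDGET_MS = 250  # steady-state hypothesis: p95 latency stays under this

def call_service(inject_latency_ms=0):
    """Stand-in for a real request; sleeps to simulate work plus injected delay."""
    base = random.uniform(0.020, 0.080)            # 20-80 ms of normal work
    time.sleep(base + inject_latency_ms / 1000.0)
    return (base + inject_latency_ms / 1000.0) * 1000

def p95_latency(samples, inject_latency_ms=0):
    latencies = sorted(call_service(inject_latency_ms) for _ in range(samples))
    return latencies[int(0.95 * (len(latencies) - 1))]   # rough p95

# 1. Steady state: measure the baseline before touching anything.
baseline = p95_latency(samples=50)
print(f"baseline p95: {baseline:.1f} ms")

# 2. Inject failure: add 100 ms of latency to the dependency under test.
experiment = p95_latency(samples=50, inject_latency_ms=100)
print(f"with injected latency, p95: {experiment:.1f} ms")

# 3. Verify: did the system stay within the hypothesis?
if experiment > LATENCY_BUDGET_MS:
    print("hypothesis violated -- halt the experiment and fix the weakness found")
else:
    print("hypothesis held -- widen the blast radius in the next run")
```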


Facebook’s data problems have an upside for banks

Banks are quick to emphasize how important guarding someone’s data is, and they have long bickered over how to share data outside of the bank’s four walls. In Jamie Dimon’s annual letter to JPMorgan Chase shareholders, the bank's CEO wrote, “We have consistently warned our customers about privacy issues, which will become increasingly critical for all industries as consumers realize the severity of the problem.” Given Facebook’s woes, a brouhaha has already broken out in banking over whether the social media platform’s data breach should slow down the open banking movement. But regardless of the implications for nonbank apps, some see the heightened sensitivity around data sharing as a reason why banks ought to step up the ways they slice and dice consumer information. Certainly, Citi is betting on an opportunity in financial wellness. Its app, which will be available to iPhone users in the coming weeks, will initially rely on word-of-mouth marketing for a service designed to help anybody understand spending patterns, spot recurring bills and open an account in-app if desired.
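As a rough illustration of one of those features, here is a hedged Python sketch that flags recurring bills in a transaction history; the field names, sample data, and "roughly monthly, same amount" rule are assumptions for illustration, not Citi's actual implementation.

```python
# Sketch: flag merchants charged a similar amount at roughly monthly intervals.
from collections import defaultdict
from datetime import date

transactions = [
    {"merchant": "StreamFlix", "amount": 12.99, "date": date(2018, 1, 14)},
    {"merchant": "StreamFlix", "amount": 12.99, "date": date(2018, 2, 14)},
    {"merchant": "StreamFlix", "amount": 12.99, "date": date(2018, 3, 14)},
    {"merchant": "Corner Cafe", "amount": 4.50,  "date": date(2018, 3, 2)},
]

def recurring_bills(txns, min_occurrences=3, day_tolerance=3):
    """Return (merchant, amount) pairs that look like recurring bills."""
    by_merchant = defaultdict(list)
    for t in txns:
        by_merchant[t["merchant"]].append(t)
    bills = []
    for merchant, items in by_merchant.items():
        if len(items) < min_occurrences:
            continue
        items.sort(key=lambda t: t["date"])
        gaps = [(b["date"] - a["date"]).days for a, b in zip(items, items[1:])]
        amounts = {t["amount"] for t in items}
        # Same amount every time, and gaps of roughly a month.
        if len(amounts) == 1 and all(abs(g - 30) <= day_tolerance for g in gaps):
            bills.append((merchant, items[0]["amount"]))
    return bills

print(recurring_bills(transactions))   # [('StreamFlix', 12.99)]
```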


Bank of America, Harvard form group to promote responsible AI

Bessant pointed out that financial institutions have a big impact on consumers’ lives and therefore a great duty to be responsible in their use of AI when it comes to extending credit, recommending investments and protecting customer funds, data and digital systems. “Because we affect money, whether it’s the movement of money or the investment and return of money or it’s how capital markets work for companies and jobs, I believe we have a monumental responsibility to get it right,” she said. All of these services "have to use models and algorithms and will be at their best when we can use predictive technologies, but we have to make sure that as we capture that growth and do what’s right and great for our customers and clients, that we’re also recognizing the potential pitfalls.” Like any other program, an artificial intelligence program is subject to “garbage in, garbage out,” drawing false conclusions when it is fed flawed or incomplete data.
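A tiny worked example of that pitfall: the same trivial approval rule reaches different conclusions depending on whether its training data is complete. All figures below are invented for illustration.

```python
# "Garbage in, garbage out" in miniature: a toy approval rule trained on an
# incomplete sample declines an applicant the complete data would approve.
def approval_threshold(incomes):
    """'Train' an approval rule: approve anyone above the mean income seen."""
    return sum(incomes) / len(incomes)

full_sample   = [28_000, 35_000, 41_000, 52_000, 60_000, 75_000]
flawed_sample = [52_000, 60_000, 75_000]        # lower-income applicants missing

applicant_income = 50_000
for name, sample in [("complete data", full_sample), ("incomplete data", flawed_sample)]:
    threshold = approval_threshold(sample)
    decision = "approve" if applicant_income > threshold else "decline"
    print(f"{name}: threshold {threshold:,.0f} -> {decision}")
# complete data: threshold 48,500 -> approve
# incomplete data: threshold 62,333 -> decline
```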


Oblivious DNS could protect your internet traffic against snooping

At its most basic level, DNS matches website names with their IP addresses, making it a fundamental part of the structure of the internet. Any change to DNS is likely to be met with resistance, the researchers say, which makes changing it to protect users difficult. It is also simple for a third party, such as law enforcement or cyber criminals, to snoop on the personally identifying information that is transmitted to DNS servers. That information includes your IP address, the geographical subnet you are on (and therefore your general location), your MAC address, and the name of the website you want to visit. Those personal details are transmitted in plain text, making them easy to intercept. Internet users also need to have faith in the security of their DNS provider: all the information transmitted can be stored, building up a complete profile of the internet use coming from your IP address, or even your particular computer.
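A short Python sketch shows how little the plaintext protocol hides, assuming a standard resolver listening on UDP port 53; the resolver address is arbitrary, and the query is built by hand so you can see that the hostname travels in the clear.

```python
# Build and send a bare-bones DNS query (A record, recursion desired) over
# plaintext UDP. Anyone on the path can read the name being looked up and the
# source address it came from; the resolver below is just a public example.
import socket
import struct

def build_query(hostname, query_id=0x1234):
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_query("example.com")
print(query.hex())   # the hostname is visible right in the packet bytes

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(2)
    s.sendto(query, ("9.9.9.9", 53))   # plaintext UDP to a public resolver
    response, _ = s.recvfrom(512)
    print(f"got {len(response)} bytes back -- also unencrypted")
```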


Splunk debuts IIoT product for in-depth analytics

“Industry 4.0’s kind of broad – it encompasses customers from transportation, oil and gas, energy and utilities companies,” she said. “These companies are using Splunk Enterprise today … we see them using Splunk Enterprise to gain insight into their industrial operations.” Splunk is known as a provider of log analysis and infrastructure management tools, built primarily around its expertise in big-data analytics. The company has enlisted an array of partners to help it navigate the murky waters of the industrial world, according to Haji. “We’ve invested very heavily in building out a very targeted set of system integrators,” she said. “These are the guys that have deep domain expertise in industrial IoT, and they also have a deep relationship with their customers.” Splunk is the first major player in the log analysis sector to make a big push into IoT, but it will face a brand-new slate of competitors beyond the Sumo Logics and Logglys of the world.
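For a sense of what feeding industrial data into Splunk Enterprise looks like at the ingestion end, here is a hedged Python sketch that posts a sensor reading to Splunk's HTTP Event Collector; the host, token, sourcetype, and sensor fields are placeholders, with only the endpoint path and Authorization header following standard HEC usage.

```python
# Sketch: send one industrial sensor reading to Splunk's HTTP Event Collector.
# Host, token and field names are placeholder assumptions.
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def send_reading(device_id, temperature_c, vibration_mm_s):
    payload = {
        "sourcetype": "iiot:sensor",   # illustrative sourcetype
        "event": {
            "device_id": device_id,
            "temperature_c": temperature_c,
            "vibration_mm_s": vibration_mm_s,
        },
    }
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Splunk {HEC_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status   # 200 means Splunk accepted the event

if __name__ == "__main__":
    print(send_reading("turbine-07", temperature_c=81.4, vibration_mm_s=2.3))
```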


How to Get Yourself Out of Technical Debt


On a long enough timeline, technical debt creates a lot of misery in the office. Team members tend toward finger-pointing and infighting, and a sense of embarrassment pervades. Nobody likes explaining over and over again to stakeholders that seemingly simple changes are actually really hard. So you might just take a breath one day and ask yourself whether life isn't too short to keep coming in every day and gingerly massaging some 20-year-old, battleship-gray WinForms app into shape. Maybe it's time to move on to greener and more satisfying pastures. Now, bear in mind that I'm not advocating that you quit your job every time the team makes a technical decision you don't agree with. I'm talking about a situation that feels like a true dead end, where you can feel your market worth slipping day by day. It's not a decision to make lightly, but you should understand that crushing technical debt isn't something you have to tolerate indefinitely, either.


Polyglot Persistence Powering Microservices

Netflix has to look at user authorization and licensing for the content. Netflix has a network of Open Connect Appliances (OCAs) spread all over the world. These OCAs are where Netflix stores the video bits, and their sole purpose is to deliver those bits as quickly and efficiently as possible to your devices, while the control plane in Amazon (AWS) handles the microservices and the data-persistence store. This service is the one responsible for generating the URL, and from there, we can stream the movie to you. The very first requirement for this service is to be highly available. We don't want any user experience to be compromised when you are trying to watch a movie, so high availability was priority number one. Next, we want tiny read and write latencies, less than one millisecond, because this service sits in the middle of the streaming path and we want the movie to play the moment you click play. We also want high throughput per node. Although the files are pre-positioned in all of these caches, they can change based on the health of the cache or when Netflix introduces new movies; there are multiple dimensions along which these movie files can change.
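To see why a memory-close store is the natural fit for that sub-millisecond requirement, here is a small Python sketch that measures lookup latency against a toy movie-to-OCA mapping; the data, URL format, and single-process dict are illustrative assumptions, not Netflix's actual service, which sits on a distributed, highly available store.

```python
# Sketch: sub-millisecond reads from an in-memory mapping of content to OCA URLs.
import time

# Illustrative mapping of (movie, region) to the OCA that should serve it.
oca_routes = {
    ("movie-42", "eu-west"): "https://oca-eu-west-17.example.net/movie-42",
    ("movie-42", "us-east"): "https://oca-us-east-03.example.net/movie-42",
}

def playback_url(movie_id, region):
    """Return the URL the client should stream from, or None if not cached."""
    return oca_routes.get((movie_id, region))

# Measure average read latency over many lookups against the 1 ms budget.
N = 100_000
start = time.perf_counter()
for _ in range(N):
    playback_url("movie-42", "eu-west")
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"avg lookup: {elapsed_ms / N * 1000:.3f} microseconds")
```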



Quote for the day:


"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy

