Daily Tech Digest - March 22, 2018

Five Pillars of Data Governance: Initiative Sponsorship

Data governance is an ongoing initiative that requires active engagement from executives and business leaders. Unfortunately, the 2018 State of Data Governance Report finds a lack of executive support to be the most common roadblock to implementing DG. This is historical baggage. Traditional DG has been an isolated program housed within IT, and thus constrained within that department's budget and resources. More significantly, managing DG solely within IT prevented those in the organization with the most knowledge of and investment in the data from participating in the process. This silo created problems ranging from a lack of context in data cataloging to poor data quality and a sub-par understanding of the data's associated risks. Data Governance 2.0 addresses these issues by opening data governance to the whole organization. Its collaborative approach ensures that those with the most significant stake in an organization's data are intrinsically involved in discovering, understanding, governing and socializing it to produce the desired outcomes.


How Serverless Computing Reshapes Security

First and foremost, serverless computing, as its name implies, lowers the risks involved with managing servers. While the servers clearly still exist, they are no longer managed by the application owner, and are instead taken care of by the cloud platform operators — for instance, Google, Microsoft, or Amazon. Efficient and secure handling of servers is a core competency for these platforms, and so it's far more likely they will handle it well. The biggest concern you can eliminate is addressing vulnerable server dependencies. Patching your servers regularly is easy enough on a single server but quite hard to achieve at scale. As an industry, we are notoriously bad at tracking vulnerable operating system binaries, leading to one breach after another. Stats from Gartner predict this trend will continue into and past 2020. With a serverless approach, patching servers is the platform's responsibility. Beyond patching, serverless reduces the risk of a denial-of-service attack.


Google parent's free DIY VPN: Alphabet's Outline keeps out web snoops

Outline promises to solve the double-edged sword of VPN services. There are loads of free VPN services, which in theory can protect sensitive information when using a public Wi-Fi network. However, as ZDNet's David Gewirtz has pointed out, you probably shouldn't entrust these with digging an encrypted tunnel between your computer and another machine. An alternative option is to pay around $120 a year for a VPN service, but again this requires trusting the provider and weighing up the jurisdiction it operates in. Outline offers journalists a cheaper way to set up their own VPN server on any cloud provider or on their own hardware, cutting out the need to trust a third party. "Outline gives you control over your privacy by letting you operate your own server. And Outline never logs your web traffic," Jigsaw product manager Santiago Andrigo wrote. "We made it possible to set up Outline on any cloud provider or on your own infrastructure so you can fully own and operate your own VPN and don't have to trust a VPN operator with your data."


Hexagonal Architecture as a Natural Fit for Apache Camel


Let's look at the two extremes: a layered architecture manages the complexity of a large application by decomposing it and structuring it into groups of subtasks at particular abstraction levels called layers. Each layer has a specific role and responsibility within the application, and changes made in one layer usually don't affect the components of other layers. In practice, this architecture splits an application into horizontal layers, and it is a very common approach for large monolithic web or ESB applications of the JEE world. On the other extreme is Camel, with its expressive DSL and route/flow abstractions. Based on the Pipes and Filters pattern, Camel divides a large processing task into a sequence of smaller, independent processing steps (Filters) connected by a channel (Pipes). There is no notion of layers that depend on each other; in fact, because of its powerful DSL, a simple integration can be done in a few lines within a single layer. In practice, Camel routes split your application by use case and business flow into vertical flows rather than horizontal layers.
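
To make the Pipes and Filters idea concrete outside Camel's Java DSL, here is a minimal, language-agnostic sketch in Python; the stage names and the pipeline itself are invented for illustration, not taken from Camel:

```python
# Minimal Pipes and Filters sketch: each filter is a small, independent
# processing step, and the "pipe" is simply the composition connecting them.

def decode(message: bytes) -> str:       # Filter 1: decode raw input
    return message.decode("utf-8")

def validate(text: str) -> str:          # Filter 2: reject bad input
    if not text.strip():
        raise ValueError("empty message")
    return text

def enrich(text: str) -> dict:           # Filter 3: attach metadata
    return {"body": text, "length": len(text)}

def pipeline(message: bytes) -> dict:
    # The pipe: a flat sequence of filters, no layers involved.
    return enrich(validate(decode(message)))

print(pipeline(b"order #42 received"))   # {'body': 'order #42 received', 'length': 18}
```

Note how each step can be tested and replaced independently, which is exactly the property the route/flow abstraction exploits.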


Do Facebook Users Really Care About Online Privacy?

Since 2014, Facebook has had a platform policy that clearly states what developers of third-party apps can and cannot do. With regard to data, third-party apps have to elaborate in their privacy policy on what data they are collecting and how they plan to use that data. These third-party apps must also delete any data received from Facebook. What Facebook also does now is moderate third-party apps. Apps go through a review process where they must justify why that information is necessary for the app. Facebook characterizes "detailed information" as anything other than a user's friends, public profile, and email. Approval is only granted if apps can show that the information they request will be directly used. But Facebook's updated platform policy only came into force a year after the "Cambridge University researcher named Aleksandr Kogan had created a personality quiz app, which was installed by around 300,000 people who shared their data as well as some of their friends' data," as revealed by Zuckerberg in the public post.



Wide area Ethernet can fuel digital network transformation

Enter wide area Ethernet (WAE) services. WAE is a technology that has been around a long time but never gained the same level of adoption as other network services such as MPLS or consumer-type broadband services. In some ways Ethernet has always been a solution looking for a problem, as its attributes didn't align cleanly with the challenges most businesses faced. Indeed, the network was considered by many to be a commodity -- the basic plumbing, if you will -- where the information being transported was best-effort in nature. Because of this, network managers and procurement officers just went with what they knew, even though it was often considerably more expensive. Digital transformation (DX) is the problem that Ethernet and WAE have been waiting for. WAE directly addresses the business problems faced by digital organizations. ... Data continues to grow at exponential rates; 90% of all data that exists today, in fact, has been created in the past two years, according to ZK Research. IoT, video, mobile services and other data will only continue to add to the glut.


9 machine learning myths

Machine learning is proving so useful that it's tempting to assume it can solve every problem and applies to every situation. Like any other tool, machine learning is useful in particular areas, especially for problems you’ve always had but knew you could never hire enough people to tackle, or for problems with a clear goal but no obvious method for achieving it. Still, every organization is likely to take advantage of machine learning in one way or another, as 42 percent of executives recently told Accenture they expect AI will be behind all their new innovations by 2021. But you’ll get better results if you look beyond the hype and avoid these common myths by understanding what machine learning can and can’t deliver. ... Think of it as anything that makes machines seem smart. None of these are the kind of general “artificial intelligence” that some people fear could compete with or even attack humanity. Beware the buzzwords and be precise. 


As organizations matured, they could use the cloud control plane tools to create NAC rules. While the interface required training, the concepts were similar: traffic from one set of hosts was allowed or disallowed. However, the cloud security control plane does represent one of the first early challenges in hybrid IT security — a consistent operations control plane. As hybrid IT services become more complex, security professionals need more granular controls between the public cloud and private infrastructure. Take the universal example of the web and application tiers in a three-tier application. Merely creating a firewall rule that allows traffic from the web tier to the application tier proved complex. Early private data center firewalls lacked the context of ephemeral cloud security objects. If the web tier leveraged elastic compute, the public cloud administrator had to ensure that auto-scaled web servers were all created in the same network scope for the static firewall to properly filter traffic.
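
One common way to express that web-to-app rule against ephemeral, auto-scaled instances is to reference security groups rather than static addresses, so new web servers inherit the rule automatically. A minimal sketch using boto3 follows; the group IDs and port are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the app tier to accept traffic from the web tier by referencing the
# web tier's security group instead of static IP ranges, so instances added
# by auto-scaling are covered automatically. Group IDs here are made up.
ec2.authorize_security_group_ingress(
    GroupId="sg-apptier0000000001",           # app-tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-webtier0000000001"}],
    }],
)
```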


What would a regulated-IoT world look like?

Perhaps the most useful contrast to the U.S.’s lack of regulatory attention to IoT security issues is Europe, where the General Data Protection Regulation has provoked howls of outrage from the tech industry, but won praise from privacy rights advocates. GDPR, in essence, places the burden on companies to state clearly and up-front what types of user data will be gathered, and precisely what it will be used for. It also gives users the right to see data that has been collected about them, and to correct inaccuracies. It’s not wildly dissimilar to the most stringent data protection law currently on the books in the U.S. – the Health Insurance Portability and Accountability Act, better known as HIPAA. According to Sadeh, a more broad-based privacy protection law in the U.S., designed to address the threats posed by IoT and other technologies that have badly outstripped existing regulations, could easily resemble HIPAA with greater scope.



Google Is Working on Its Own Blockchain-Related Technology

The technology presents challenges and opportunities for Google. Distributed networks of computers that run digital ledgers can eliminate risks that come with information held centrally by a single company. While Google’s security is strong, it’s one of the largest holders of information in the world. The decentralized approach is also beginning to support new online services that compete with Google. Still, the company is an internet pioneer and has long experience embracing new and open web standards. To build its ledger, Google has looked at technology from the Hyperledger consortium, but it could opt for another type that may be easier to scale to run millions of transactions, one of the people familiar with the situation said. "Any time there’s a paradigm shift like this, there’s an opportunity for new giants to emerge -- but also for incumbents to adopt the new approach," said Elad Gil, a startup investor who worked on early mobile projects at Google more than a decade ago.



Quote for the day:


"Do not lose hold of your dreams or aspirations. For if you do, you may still exist but you have ceased to live." -- Thoreau


Daily Tech Digest - March 21, 2018

AI outpaces lawyers in reviewing legal documents


For the study, the lawyers and the LawGeex AI had to analyse five previously unseen contracts with 153 paragraphs of technical legal language, under controlled conditions designed to mirror the way lawyers review and approve everyday contracts. The highest-performing lawyer matched LawGeex AI at 94% accuracy, while the lowest-performing lawyer achieved just 67%. The most notable difference in the test between machines and humans lies in the time factor: while it took LawGeex AI only 26 seconds to complete the task, the lawyers took an average of 92 minutes. The longest time spent by a human to accomplish the test was 156 minutes and the shortest time recorded was 51 minutes. Commenting on the study, Gillian K. Hadfield, Professor of Law and Economics at the University of Southern California, said: “This research shows technology can help solve two problems – both making contract management faster and more reliable, and freeing up resources so legal departments can focus on building the quality of their human legal teams.”



Are You Just Keeping the Lights On in Your Datacenter?

The notion that IT struggles to move beyond its traditional role and into a more innovative one is very common. But, as the IDC statistic shows, IT is more often a cost center, rather than a source of innovation and revenue for the company. Why is this situation still so widespread? A core issue is that nearly everything in the datacenter is manual and not automated. Most datacenters have custom configurations that require their own manual maintenance with specialized tools. Incremental progress on any one or two of these helps, but it isn't enough to substantially change the big picture for the company. Over time, people get used to this status quo and start to think that it is completely normal. They fall into the trap of believing that a huge step forward toward automation and innovation is impossible.



Top 10 open source legal stories that shook 2017

In February 2017, GitHub announced it was revising its terms of service and invited comments on the changes, several of which concerned rights in the user-uploaded content. The earlier GitHub terms included an agreement by the user to allow others to "view and fork" public repositories, as well as an indemnification provision protecting GitHub against third-party claims. The new terms added a license from the user to GitHub to allow it to store and serve content, a default "inbound=outbound" contributor license, and an agreement by the user to comply with third-party licenses covering uploaded content. While keeping the "view and fork" language, the new terms state that further rights can be granted by adopting an open source license. The terms also add a waiver of moral rights with a two-level fallback license, the second license granting GitHub permission to use content without attribution and to make reasonable adaptations "as necessary to render the website and provide the service."


Descriptive Statistics: The Mighty Dwarf of Data Science

Consider a case where a monitoring system is to detect anomalies within the data. Typically, one may turn to the classic means of outlier analysis like the DBSCAN-based approaches or LOF. Nothing is wrong with these; they may perfectly well point towards the directions where the outliers may be present. However, these techniques may require substantial computational resources to complete the task on high volumes of data in a reasonably acceptable amount of time. A much faster alternative may come from treating the given case as a time series analysis problem. Such data coming from a system operating in 'healthy' conditions would have a typical, acceptable amplitude distribution and, in such a scenario, any deviation from the expected shape may be considered a potential threat, worth detecting. A very fast descriptive statistic aimed at summarizing the shape of the distribution of the signal is the 'kurtosis'.
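
As a rough sketch of how a kurtosis-based monitor could look (the window size, threshold, and synthetic signal below are arbitrary choices for illustration), one can slide a window over the signal, compute the kurtosis per window, and flag windows whose shape deviates from the healthy baseline:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
signal = rng.normal(0, 1, 10_000)             # 'healthy' Gaussian-like signal
signal[6_000:6_050] += rng.normal(0, 8, 50)   # injected anomalous burst

window = 500
for start in range(0, len(signal) - window + 1, window):
    k = kurtosis(signal[start:start + window])   # excess kurtosis, ~0 if normal
    if abs(k) > 2.0:                             # arbitrary alert threshold
        print(f"possible anomaly in samples {start}-{start + window}, kurtosis={k:.2f}")
```

Because each window needs only a single pass over the samples, this runs far faster than density-based outlier methods on the same volume of data.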


What Are the Limits of Forensic Data Retention?


Internet Service Providers will retain the IP addresses of customers and the servers that they connect to. Armed with this information, forensic investigators can determine which websites suspects are accessing. However, most ISPs do not keep records of the actual content their subscribers access. There are a couple of reasons for this. First of all, keeping records of all content would be far more demanding on their servers. They simply don't have the resources, even in the age of big data. Even if they wanted to keep these records, it would be impossible to see what content customers are accessing on most websites. Most websites have encrypted connections, so Internet Service Providers can't tell what their users are doing on them. For example, since Facebook uses HTTPS connections, Internet service providers can't read customers' messages or see what content they post on their Facebook feed. Nor can they see what they are searching for on Google.


AI key to do 'more with less' in securing enterprise cloud services

Artificial intelligence (AI), machine learning (ML), and predictive analytics applications may one day prove to be the key to maintaining control and preventing successful hacks, data breaches, and network compromise. These technologies encompass deep learning, algorithms, and Big Data analysis to perform a variety of tasks. The main goal of AI and ML is usually to find noteworthy anomalies in systems and networks, whether suspicious traffic, unauthorized insider behavior and threats, or indicators of compromise. AI technologies are built to learn, detect, and prevent suspicious and dangerous activities, improving and refining themselves the longer such applications and systems are in use. This provides companies with a custom cybersecurity system that tailors itself to their requirements, in contrast to an off-the-shelf, traditional antivirus security solution -- which is no longer enough with so many threats lurking at the perimeter.
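
As a minimal sketch of the anomaly-finding idea, here is scikit-learn's IsolationForest trained on synthetic "normal" traffic features; the features and numbers are invented for illustration, and a real system would ingest actual telemetry:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-host features: [bytes sent/min, connections/min, failed logins/min]
normal_traffic = rng.normal([5_000, 30, 1], [800, 5, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspect = np.array([[48_000, 400, 25]])   # exfiltration-like burst
print(model.predict(suspect))             # [-1] means flagged as anomalous
```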


What is the leading IT cost-optimisation priority for CIOs?

This bolsters Gartner’s opinion that the most successful organisations are more likely to trust their IT organisation to manage their IT and digital technology spending. Respondents were also questioned about who manages the selection and approval of cost optimisation ideas. Those with visibility of both the IT shared services budget and all digital spending across the organisation reported that, on average, nearly half of their digital technology spending is paid for by the business. A quarter is paid for out of the IT budget, with chargeback to the business. “As you’d expect, CIOs have the most influence over the selection and approval of cost optimisation opportunities within IT shared services,” said Buchanan. “Interestingly, CIOs who focus on digital business opportunities have greater responsibility for cost optimisation than those who don’t. This suggests that CIOs are starting to exert influence over selecting and approving digital business ideas to optimise business costs.”


IoT in Healthcare: Balancing Patient Privacy & Innovation

Medical technology tends to lag behind other technologies, as the cost of mistakes at medical practices and hospitals can be astronomical. As a result, the field can lag behind when it comes to adopting the latest digital or IoT technologies. Patient privacy is a major issue, so all new technologies must be adopted carefully while adhering to the various data compliance obligations that apply both to companies in general and to healthcare organisations specifically. Managing clinics and hospitals is complex, and it's expensive. Many healthcare organizations rely on multiple computer and networking systems. Through smart bracelets, administrators can better track patient movement, and they can determine how often patients meet with their doctors. In addition, IoT technology can make it easier to track and analyze patients' vital signs and other metrics, offering invaluable feedback and resolution not possible with manual measurements.


Organizing for digital industrial leadership

Digital industrial leadership is transforming the industrial world. For BHGE, specifically, data and analytics are fundamentally changing the way work gets done in our business and in the oil and gas industry as we prepare for the next big step-change in productivity. When I think of digital industrial leadership, I think about using data to move from looking in the rear-view mirror to looking into the future. We are beyond making decisions based on order history; we are using data to be predictive and make recommendations for future sales targets. As an example, we are using artificial intelligence on the shop floor to understand what drives disruptive, unscheduled downtime of our welding machines. When information technology meets operations technology, we learn what behaviors or indicators lead up to that unplanned downtime. We can use predictive analytics to do preventive maintenance and improve our productivity.


Credit Risk Prediction Using Artificial Neural Network Algorithm

To predict credit default, several methods have been created and proposed. The choice of method depends on the complexity of the bank or financial institution and the size and type of the loan. The most commonly used method has been discriminant analysis. This method uses a score function that helps in decision making, whereas some researchers have raised doubts about the validity of discriminant analysis because of its restrictive assumptions: normality and independence among variables [4]. Artificial neural network models have been created to overcome the shortcomings of other, inefficient credit default models. The objective of this paper is to study the ability of neural network algorithms to tackle the problem of predicting credit default, which measures the creditworthiness of a loan application over a time period. A feed-forward neural network algorithm is applied to a small dataset of residential mortgage applications at a bank to predict credit default.
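
A minimal feed-forward sketch with scikit-learn is shown below; the applicant features, labels, and network shape are synthetic stand-ins for illustration, not the paper's actual mortgage dataset:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-ins for applicant features: income, loan amount, debt ratio.
X = rng.normal([60_000, 200_000, 0.30], [15_000, 80_000, 0.10], size=(1_000, 3))
# Toy labelling rule: high debt ratio plus a large loan-to-income ratio -> default.
y = ((X[:, 2] > 0.35) & (X[:, 1] / X[:, 0] > 3.5)).astype(int)

X_scaled = StandardScaler().fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1_000,
                    random_state=0).fit(X_scaled, y)

print(clf.predict_proba(X_scaled[:3]))   # per-applicant default probabilities
```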


Quote for the day:


"Never allow someone to be your priority while allowing yourself to be their option." -- Mark Twain


Daily Tech Digest - March 20, 2018

The future of computer security is machine vs machine

Because so much of our computing infrastructure will be protected and controlled by well-informed, cloud-based decision makers, the malware and hackers of the future will be forced to fight the centralized services first and foremost if they ever hope to spread. They will probably subscribe to these same services and look for holes, or subscribe to a malicious service that belongs to multiple services and looks for and sells weaknesses, much like some services do today fighting the accuracy of VirusTotal. This is where the future defense and attack scenarios start looking very much machine versus machine. Our future defenses will be more centralized, coordinated, and automated. The hackers will have to do the same thing to stay ahead. If they don't automate as much as or more than the defensive services do, they won't be able to do as much badness. Hackers and malware will turn to automation and AI just as much as the defenders. When the defenders block the malicious thing that was being successful a few minutes ago, the malicious automated service will have to quickly respond. Whoever's AI is better will ultimately win.


Empowering Citizen Data Scientists to Solve the AI Skills Shortage


Citizen data scientists are analysts with above-average skills but without a formal academic background in data science, explained Ashley Kramer, vice president of Product Management at Alteryx. "I see the citizen data scientists as this emerging group," she said in an interview. "They don't have a degree in data science but they're more advanced than your average analyst. They are people with advanced capabilities like writing scripts within Excel. They are starting to get to that next level of being able to create predictive analytics. They need a little bit of help because they're not programmers." This is the group of what in an earlier era were called power users. Alteryx is designed to help them. "With the data scientist shortage," Kramer said, "we provide a platform that can be used by citizen data scientists in a code-free way, which is really important as they're learning the process." Alteryx offers a "code-friendly" way for budding data scientists to begin creating machine learning models for business requirements such as predicting and preventing equipment failures.


How complexity, multicloud sprawl, and need for maturity hinder hybrid IT


To some degree, we’ve already hit that inflection point where technology is being used in inappropriate ways. A great example of this—and it’s something that just kind of raises the hair on the back of my neck—is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “go cloud.” The board should be very business-focused and instead they're dictating specific technology, whether it’s the right technology or not. That’s really what this comes down to.  Another example is folks that try and go all in on cloud but aren’t necessarily thinking about what’s the right use of cloud—in all forms: public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer. We in the enterprise IT space haven't really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our head around it but also get in the right frame of mind and thought process around how to take advantage of them in the best way possible.


Enhancing digital infrastructure: Why it matters & the best strategies for your business

Change can be unsettling, and workers can understandably be apprehensive about any differences in role, focus or tasks, especially if it involves a technology element they don't necessarily understand. For example, if moving to the 'cloud', you can't just 'buy cloud' and hope everyone catches on to the concept overnight. No person or training course can deliver the transformation single-handedly. There's also a whole range of concepts to become familiar with before teams can work at the speed that cloud can enable. Focus on the move to 'becoming cloud' – a gradual process of change – which will transform the culture in a structured and less intimidating way. Help your staff by providing as much clarity as you can; translate top-line goals and priorities into specific metrics and KPIs for employees at all levels. Allow them time and space to experiment with new technology and let them know it's OK to get things wrong! Encourage users to communicate and give feedback so the right support is identified.


The evolution of systems requires an evolution of systems engineers

New frameworks, architectures, processes, and a thriving ecosystem of tools have emerged to help us meet those challenges. Some of these are in an embryonic state, but rapid adoption is driving quick maturity. We've seen this evolution in compute: it's only been four years since containers became a mainstream technology, and we are now working with complex application-level abstractions enabled by tools like Kubernetes. A similar evolution is occurring with deployment, serverless, edge-computing technology, security, performance, and system observability. Finally, no changes can exist in a human and organizational vacuum. We have to develop the leadership skills necessary to build truly cross-functional teams and enable the rapid iteration needed to build these systems. We have to continue the work of the DevOps and SRE communities to break down silos, streamline transitions between teams and increase development velocity.


Surprisingly, These 10 Professional Jobs Are Under Threat From Big Data

When you read or hear news stories about the imminent takeover of robots and algorithms that will eliminate jobs for human workers, many times the first examples given are blue-collar jobs like factory workers and taxi drivers. And you may have mentally congratulated yourself because your “professional” job is safe from the threat of being outsourced to computers. But don’t feel so safe just yet. More and more, sophisticated algorithms and machine learning are proving that jobs previously thought to be the sole purview of humans can be done — as well or better — by machines. Boston Consulting Group has predicted that by 2025 as much as a quarter of jobs currently available will be replaced by either smart software or robots. A study out of Oxford University also suggested that as much as 35 percent of existing jobs in the U.K. could be at risk of automation inside the next 20 years.


IBM Watson Data Kits speed enterprise AI development

More than half of data scientists said they spend most of their time on janitorial tasks, such as cleaning and organizing data, labeling data, and collecting data sets, according to a CrowdFlower report, making it difficult for business leaders to implement AI technology at scale. Streamlining and accelerating the development process for AI engineers and data scientists will help companies more quickly gain insights from their data, and drive greater business value, according to IBM. "Big data is fueling the cognitive era. However, businesses need the right data to truly drive innovation," Kristen Lauria, general manager of Watson media and content, said in the release. "IBM Watson Data Kits can help bridge that gap by providing the machine-readable, pre-trained data companies require to accelerate AI development and lead to a faster time to insight and value. Data is hard, but Watson can make it easier for stakeholders at every level, from CIOs to data scientists."


An Incredible New Type of Brain Implant Can Boost Memory by 15%

By fine tuning the electrical activity of the probes, the scientists aimed to activate key components of the brain's memory network only when it struggled to store memories, but not when it was working fine. The basic concept itself of boosting memorisation and recall through neural stimulation is old ground. Neuroscientists have gradually progressed from using non-invasive Transcranial Magnetic Stimulation techniques to deep brain stimulation in an effort to tickle the right pathways and encourage the brain to store and reconnect with memories. While there have been encouraging successes by precisely targeting areas such as the hippocampus and medial temporal lobes, the results haven't always been consistent. Part of the problem could be the choice of location, but another issue could be the method. Past efforts have used what's called an open loop system, meaning the stimulation wasn't tweaked in response to the brain's activity.


Hyperconverged infrastructure gets its own Gartner magic quadrant

Hyperconvergence is an IT framework that combines storage, virtualized computing and networking into a single system to reduce data center complexity and increase scalability. Hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking. The promise of hyperconverged infrastructure is simplicity and flexibility compared with legacy solutions. The integrated storage systems, servers and networking switches are designed to be managed as a single system, across all instances of a hyperconverged infrastructure. As hyperconvergence has caught on among enterprises, major system vendors have gotten into the action by acquiring startups or bundling their servers with HCI software through OEM arrangements. Gartner's new magic quadrant specifically focuses on vendors that develop the core hyperconvergence software. The new magic quadrant drops the system hardware requirement that's part of the HCIS appliance model, Gartner says.


A.I. and speech advances bring virtual assistants to work

“It is that idea of ambient computing, the idea that at any time I could just say ‘Alexa start my meeting,’ ‘Alexa how are my sales figures?’ or ‘Alexa I forgot to shut off the projector in the conference room please shut that off for me.’ It is very natural, it is very spontaneous,” Ibitski said. “And all of this is happening because of the advancements we have seen in what is called NLU, or natural language understanding. And that is the difference – it is understanding context.” Collin Davis, general manager of Alexa for Business, said virtual assistants are already helping employees get work done. “What we are finding is a really interesting shift is happening, where voice is offering up almost another dimension of multi-tasking, where workers sitting at their desk can use Alexa almost as a vocal multi-tasker to be able to get information quickly without losing focus,” Davis said. “You could be working on a report and you need to know how many deals closed last quarter without having to reach into your pocket or find an app or switch websites – you just get the information that you need.”



Quote for the day:


"Leadership appears to be the art of getting others to want to do something you are convinced should be done." -- Vance Packard


Daily Tech Digest - March 19, 2018

Linux Foundation unveils open source hypervisor for IoT products

The Linux Foundation recently unveiled ACRN (pronounced "acorn"), a new open source embedded reference hypervisor project that aims to make it easier for enterprise leaders to build an Internet of Things (IoT)-specific hypervisor. The project, further detailed in a press release, could help fast track enterprise IoT projects by giving developers a readily-available option for such an embedded hypervisor. It will also provide a reference framework for building a hypervisor that prioritizes real-time data and workload security in IoT projects, the release said. ACRN is made up of the hypervisor and its device model, the release noted. This is complete with I/O mediators. Firms like Intel, LG Electronics, Aptiv, and more have already contributed to the project. "ACRN's optimization for resource-constrained devices and focus on isolating safety-critical workloads and giving them priority make the project applicable across many IoT use cases," Jim Zemlin, executive director of The Linux Foundation, said in the release.



Java at a crossroads: Why the popular programming language needs to evolve to stay alive

Java is used most often in cloud computing, data science work, web development, and app development, said Karen Panetta, IEEE fellow and dean of graduate engineering at Tufts University. "I still see it evolving, and very popular," Panetta said. While languages such as Python are growing as well, Java is adapting to the increasing number of deep learning and machine learning workloads. "There's becoming a lot of libraries out there that are compatible for deep learning," Panetta said. "I think the fact that we keep talking about cloud computing and all of those things, that Java is still going to be the dominant player." Java also has built in more security options than Python, so it's a good option for Internet of Things (IoT) applications, Panetta said. Java has a foothold everywhere, and large user groups and libraries already written, making it a natural pathway for machine learning, Panetta said. "It's evolving to meet the needs," she added.


FPGA maker Xilinx aims range of software programmable chips at data centers

The first product range in the category is code-named Everest, due to tape out (have its design finished) this year and ship to customers next year, Xilinx announced Monday. Whether it’s an incremental evolution of current FPGAs or something more radical is tough to say since the company is unveiling an architectural model that leaves out many technical details, like precisely what sort of application and real-time processors the chips will use. The features that we do know about are consequential, though. Everest will incorporate a NOC (network-on-a-chip) as a standard feature, and use the CCIX (Cache Coherent Interconnect for Accelerators) interconnect fabric, neither of which appear in current FPGAs. Everest will offer hardware and software programmability, and stands to be one of the first integrated circuits on the market to use 7nm manufacturing process technology (in this case, TSMC’s). The smaller the manufacturing process technology, the greater the transistor density on processors, which leads to cost and performance efficiency.


Ethernet bandwidth costs fall to a six-year low


Cloud provider demand for more throughput increased last year's average bandwidth per switch port connection to almost 17 Gb from 12 Gb in 2016, according to the latest report from Crehan Research Inc., based in San Francisco. "Public, private and hybrid cloud providers are looking to deploy much faster networks within and between data centers in order to handle the myriad of new and existing applications that their customers need," Seamus Crehan, president of Crehan Research, said in a statement. "In turn, the data center switch vendors are responding by offering significantly more bandwidth at little or no additional cost." The net result in 2017 was impressive increases in Ethernet bandwidth, port shipments and revenue in the branded switch market, Crehan said. Revenue rose 10% -- the highest annual growth in four years. 


What CISOs must know about DFARS and NIST to be compliant

Privilege management and application control map to many of the different controls within the guidelines – and that's hardly surprising given the proven effectiveness of the two security controls when combined with the visibility they provide. We know that privilege management allows admin rights to be applied to applications as needed – rather than giving the user too much access. Application control is the part that allows us to whitelist or blacklist an application from running at all. The good thing about these two technologies together is that they're a great "bang for the buck." Between them, they overlap to address controls in access control, audit and accountability, configuration management, maintenance, and system and information integrity. Compliance is crucial for CISOs because those who fail to comply will likely lose government contracts. Organizations that are able to demonstrate compliance at an early stage may be in a better position to secure additional wins.
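
As a conceptual sketch of the whitelisting half (not any vendor's actual product), application control often reduces to checking a binary's hash against an approved list before allowing it to run; the digest below is simply the SHA-256 of empty input, used as a placeholder:

```python
import hashlib

# Hypothetical allowlist of approved binaries, keyed by SHA-256 digest.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Permit execution only if the binary's hash is on the allowlist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES
```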


IT’s Most Wanted: 16 Traits Of Indispensable IT Pros

Taking fresh looks at old problems is an essential part of the digital transformations that are changing many organizations' cultures, leading to approaches like DevOps and agile and incorporating emerging technical solutions such as AI and IoT, says Christoph ...  “There is one thing that IT staff cannot afford — and that’s to stand still,” Goldenstern says. “The willingness to learn and keep evolving, making yourself vulnerable in the process, is absolutely essential to staying relevant and being a growth driver in a constantly evolving business.” ... “In order to succeed in IT you need to have the ability to look at a problem, analyze it, and find a way to solve it,” Martini says. “I look for people who understand their strengths and weaknesses. These are the people that are most capable of learning on the job, while still improving the team’s ability overall. Look at raw potential. If you have potential and the drive to improve, the rest will follow.”


Disaster and Contingency Planning Lessons from the ICU

It’s inaccurate to accuse Memorial Hospital of not having a disaster plan. Theirs was 246 pages long. They had a designated disaster coordinator. What they didn’t have was a leader who had looked ahead multiple steps. They also failed to convert the generic pieces into living, breathing human beings whose survival hinged on what moves they made next. No one at Memorial knew that the surrounding levees would break after Katrina, isolating the hospital. That the generators, whose move to a higher floor had always fallen to a lower budget priority than some other need, would be incapacitated by flooding. That the presence of patients of a provider that was leasing the seventh floor would multiply the census of extremely ill patients exponentially. ...  That same physician was subsequently charged with second-degree murder. She was accused of choosing to euthanize the sickest patients without their consent. A comprehensive review of the situation indicates that, at a minimum, people in authority were scrambling to deal with their pieces of the game. No one was watching the whole board.


Android Oreo: 18 advanced tips and tricks

Got a notification you don't want to deal with immediately — but also don't want to forget? Use Oreo's super-handy (but also super-hidden!) snoozing feature: Simply slide a notification slightly to the left or right, then tap the clock icon that appears along its edge. That'll let you send it away for 15 minutes, 30 minutes, one hour, or two hours and then have it reappear as new when the time is right. ... Another new Oreo feature is the system-level ability for launchers to display dots on an app's home screen icon whenever that app has a notification pending — yes, much like the notification badges on iOS. Unlike iOS, though, Android already has an excellent system for viewing and managing notifications, which can make this addition feel rather redundant and distracting. But wait! Here's a little secret: You can disable the dots — if you know where to look. Mosey on back to the Apps & Notifications section of your system settings, then tap the line labeled "Notifications" and turn off the toggle next to "Allow notification dots."


Predictive maintenance: One of the industrial IoT’s big draws

CarForce is mostly focused on selling its product to garages, but Lora said that the potential beneficiaries are numerous. In the garage use case, mechanics can get real-time maintenance data from vehicles they service, which offers both the ability to warn customers of impending problems and to correlate large data sets together to help predict future reliability issues. It's a value-add because the garage can stay a step ahead of mechanical issues – an alert goes off, and the garage can contact the customer to schedule maintenance. Even an awareness that customer X might be coming in for an oil change on a given day can help with planning and scheduling. “If you look at the big data/AI path, step one is just seeing the data,” said Lora. It’s part of what she refers to as the “lilypad” approach to development – building one system to enable a leap to the next lilypad, and so on. CarForce plans to operate on a population level - predicting reliability and failures across big swaths of the automotive landscape.


The benefits of machine learning in network management


The problem with rule-based systems is they require maintenance and frequent updating as new rules are needed. It is often too cumbersome to create rules where numerous changes in the conditions require very different results. In addition, these systems are not very flexible. The rule sets may miss a problem if the rule set in question doesn't exactly match the problem's symptoms. It's much better to build a system that can learn about problems from the network experts who use it -- much like training a person who is new to the field of networking. Then, as new problems and solutions are found, the system would learn the symptoms and the resulting actions to take. Most of the industry agrees the integration of AI is among the benefits of machine learning. For our purposes, think of machine learning and deep learning as examples of neural network technology. A neural network is trained when it is fed a lot of data from the domain in question -- along with the appropriate answer or response. The neural network learns the appropriate response when presented with new data.
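
A toy illustration of the "feed it symptoms plus the expert's answer" idea, using a decision tree in place of a neural network for brevity (the symptom encoding and remediation labels are invented):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical symptom vectors: [packet loss %, latency ms, CRC errors/min],
# each paired with the action a network expert took for that fault.
symptoms = [
    [0.1,  20,   0],   # healthy
    [5.0, 180,   0],   # congested uplink
    [0.2,  25, 300],   # failing cable/optic
    [8.0, 400,   5],   # congested uplink
    [0.3,  30, 250],   # failing cable/optic
]
actions = ["no_action", "reroute_traffic", "replace_cable",
           "reroute_traffic", "replace_cable"]

model = DecisionTreeClassifier(random_state=0).fit(symptoms, actions)
print(model.predict([[6.0, 250, 2]]))   # likely ['reroute_traffic']
```

As new fault/fix pairs are collected, retraining the model takes the place of hand-editing an ever-growing rule set.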



Quote for the day:


"My past has not defined me, destroyed me, deterred me, or defeated me, it has only strengthened me." -- Steve Maraboli


Daily Tech Digest - March 18, 2018

The Differences Between Machine Learning And Predictive Analytics

Machine learning applications can be highly complex, but one that’s both simple and very useful for business is a machine learning algorithm that compares employee satisfaction ratings to salaries. Instead of plotting a predictive satisfaction curve against salary figures for various employees, as predictive analytics would suggest, the algorithm assimilates huge amounts of random training data upon entry, and the prediction results are affected by any added training data to produce real-time accuracy and more helpful predictions. ... Predictive analytics can be defined as the procedure of condensing huge volumes of data into information that humans can understand and use. Basic descriptive analytic techniques include averages and counts. Descriptive analytics based on obtaining information from past events has evolved into predictive analytics, which attempts to predict the future based on historical data. This concept applies complex techniques of classical statistics, like regression and decision trees, to provide credible answers to queries.
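
The contrast can be sketched in a few lines: a predictive-analytics-style regression is fitted once on historical data, while an online learner keeps updating as new training data arrives. The salary/satisfaction data below is synthetic, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.default_rng(7)
salary = rng.uniform(40_000, 120_000, size=(200, 1))
satisfaction = 2 + salary[:, 0] * 1e-5 + rng.normal(0, 0.3, 200)

# Predictive-analytics style: fit once on the historical data.
static = LinearRegression().fit(salary, satisfaction)

# Machine-learning style: keep updating as new observations stream in.
online = SGDRegressor(random_state=0)
for batch in range(4):                    # simulate data arriving over time
    idx = slice(batch * 50, (batch + 1) * 50)
    online.partial_fit(salary[idx] / 1e5, satisfaction[idx])   # scaled input

print(static.predict([[90_000]]))         # one-off model's estimate
print(online.predict([[0.9]]))            # online model, same query (scaled)
```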


Creating With Cognitive


“Our IAIC initiatives were really born in response to our Clients. They said to us, ‘We’re intrigued by the concept of Cognitive Automation — but we don’t really know how it can affect our business’. We knew we had to create a safe place where companies could experiment combining process based automation solutions like Robotics Process Automation (RPA) and Business Process Management (BPM) with Autonomic and Cognitive Assets. Our process starts with an Education session to demystify different types of automation and arrive on common definitions. We then begin Ideating on how this technology could impact the overall organization and business processes. Here we take a hard look at User Experience, End to End Process Views and what the “art of the possible” could become in the future. Next, we move to a Strategy session, where we bring data scientists, business analysts, software engineers and developers together to re-imagine what the client — and their customers — really need to meet their expectations and begin applying the defined technologies to actual client use cases.


Data-Savvy Banking Organizations Will Destroy Everyone Else

“To win in today’s market and ensure future viability, it is essential that organizations capture value quickly, change direction at pace, and shape and deliver new products and services. Organizations also need to maximize the use of ‘always on’ intelligence to sense, predict and act on changing customer and market developments,” said Debbie Polishook, group chief executive, Accenture. The good news is that, despite what appears to be an ominous future, over 40% of executives see more opportunities than threats compared to two years ago. The key is to break down silos and leverage data and insights to support both internal and external business needs. Intelligent organizations have five essential ingredients that contribute to a lasting and impactful business process transformation. These essentials provide the foundation for an agile, flexible and responsive organization that can act swiftly on market and consumer changes and be in a better position to succeed.


Where NEXT for Tech Innovation in 2018?


Healthcare is an industry that is ripe for disruption. We will begin to see the power of IoT in healthcare with the emergence of inexpensive, continuous ways to capture and share our data, as well as derive insights that inform and empower patients. Moreover, wearable adoption will create a massive stream of real-time health data beyond the doctor’s office, which will significantly improve diagnosis, compliance and treatment. In short, a person’s trip to the doctor will start to look different – but for the right reasons. Samsung is using IoT and AI to improve efficiency in healthcare. Samsung NEXT has invested in startups in this area, such as Glooko which helps people with diabetes by uploading the patient’s glucose data to the cloud to make it easier to access and analyse them. Another noteworthy investment in this space from Samsung NEXT is HealthifyMe, an Indian company whose mobile app connects AI-enabled human coaches with people seeking diet and exercise advice.


Security Settles on Ethereum in First-of-a-Kind Blockchain Transaction

“It’s quite exciting that you can now leverage any clearing system, and it’s legally enforceable on even a public blockchain,” said Avtar Semha, founder and CEO of Nivaura, whose technology was used last year to issue an ethereum bond. Semha says it’s unclear with the note being issued Friday exactly how much will be saved on the overall cost of the transaction. But he added that in the ethereum bond last year the final cost was reduced from an estimated 40,000 pounds to about 50 pounds, “which is pretty awesome,” he said. Further, law firm Allen and Overy helped ensure the note was compliant, the Germany-based investment services firm Chartered Opus provided issuance services and Marex helped fix and execute the note within a “sandbox” created by the U.K. Financial Conduct Authority (FCA). As revealed for the first time to CoinDesk, on March 14, Nivaura also received full regulatory approval from the FCA that removed some restrictions and allows the company to operate commercially.


IoT and Data Analytics Predictions for 2018

With the increased collection of Big Data and the necessity of advanced analytics, 2018 will witness high usage of cloud-based analytics software rather than on-premises software. Reports suggest that more than 50% of businesses will adopt a cloud-first strategy for their initiatives around big data analytics. AI will completely revolutionize the way organizations work today. Enterprises will take full advantage of machine learning to optimize infrastructural behavior, transform business processes, and improve decision-making. Gartner states that AI is just the start of a 75-year technological cycle and that it will drive revenue for 30% of market-leading businesses. According to Gartner, natural language will play a dual role, as a source of input for many business applications and for a variety of data visualizations. Operational transformation is necessary to adopt algorithmic business with DNNs (deep neural networks) in 2018; they will become a standardized component in the toolbox of more than 80% of data scientists.


Beyond Copy-Pasting Methods: Navigating Complexity


Why is agility a good idea? The title of Jeff Sutherland’s book promises Scrum: doing twice the work in half the time. At first sight that looks like a pure efficiency issue. But if we ask experienced agilists why they are doing agile, they usually come up with a list of challenges for which agile works better than other approaches: users only half understand their own requirements. Requirements change because the context changes. You have to build technological substitution into your design. Your solution is connected to many other things, and they all interact and change. But also inside the project you know there will be unpredictable problems and surprises. If we look behind the obvious: what is the common force that makes these challenges behave the way they do? The answer is complexity. Exploring complexity has a big advantage. Once we understand more about the complexity behind the problems which we are trying to solve with agile, we in fact clarify the purpose of our agile practice.


Building a new security architecture from the ground up

Overseeing an infrastructure that is operating thousands of servers is a burden on any architecture team. Moving those servers—all or in part—to the cloud takes patience and innovation. The innovation part, Fry said, is key because “most commercial security products are designed and built for specific use cases. Scale and complexity typically are not present,” meaning that architects in those situations need to adapt ready-built products to their networks or develop new tools from scratch, all of which takes time, money, and skill. Further, not all parts of the network can be treated equally; enterprise and customer-facing environments differ from test environments, which in turn differ from production environments. When dealing with networks like those at Yahoo or Netflix, the need to think “outside the box” and innovate is “not desirable; it’s a requirement,” said Fry. Though a security architect may be primarily concerned about security features and controls, the business is primarily concerned about availability and uptime.


Enterprises need a new security architecture: Graeme Beardsell

Today, data, applications, and users are outside the firewall and on the cloud, where they traverse the public Internet. To paraphrase, traditional security systems are guarding a largely empty castle. This means that enterprises need new approaches in their concept of security and to build new security architecture. Essentially, security should be designed to take advantage of the shape of the Internet and not try to defy it. The other challenge that is perhaps ‘invisible’ is the large number of vendors and solutions that each organization needs to manage. Analysts now advocate rationalizing multiple solutions by different vendors into suites of solutions by a single vendor to provide greater efficiency in productivity, and also towards solutions that share an integrated platform to facilitate data exchange and analysis.


10 Lessons Learned from 10 Years of Career in Software Testing

There is nothing wrong with getting certified, but it's not compulsory. A good tester needs to possess testing skills like a sharp eye for detail and analytical and troubleshooting skills, and I believe no certification can prove that you are good at those skills. While writing test cases, none of us would prefer to think about boundary value analysis and decision tables specifically. What one needs is the application of common sense to knowledge. Who would like a person who points out litter on your balcony and makes you sweep it? No matter that he is helping to make something clean, mostly he won't be appreciated. That is how the profession is! You might or might not be appreciated for the quality improvement work you are doing, but you need to understand the importance of what you are doing. And from time to time, you need to pat yourself on the back for the work you are doing.



Quote for the day:



"Character is much easier kept than recovered." -- Thomas Paine


Daily Tech Digest - March 17, 2018

A Comparison Between Rust and Erlang


Erlang, being a high-level, dynamic and functional language, provides lightweight processes, immutability, distribution with location transparency, message passing, supervision behaviors etc. Unfortunately, it is less than optimal at doing low-level stuff and is clearly not meant for that. ... Indeed, XML stanzas have to be read from the command line or from the network, and anything coming from outside the Erlang VM into it is tedious to work with. You possibly know the odds. For this kind of use case, one could be tempted to consider a different language. In particular, Rust has recently come to the foreground due to its hybrid feature set, which makes similar promises to Erlang’s in many aspects, with the added benefit of low-level performance and safety. Rust compiles to binary and runs on your hardware directly, just like your C/C++ program would. How is it different from C/C++ then? A lot. According to its motto: “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”


What everyone gets wrong about touchscreen keyboards

They’re not thinking about technology they haven’t seen or other ways of working with a device they haven’t tried. Another reason for the opposition is that two-screen laptops aren’t new. We’ve seen the idea tried in the past ten years in the form of Canova’s Dual-Screen Laptop, the Acer Iconia 6120 Dual Touchscreen Laptop, the Toshiba libretto W105-L251 7-Inch Dual Touchscreen Laptop and others. These devices were unpleasant to use and were rejected by laptop buyers. Future two-screen laptops will be the opposite. Here are five reasons why you’ll love two-screen laptops. ... Apple’s patents show an iTunes and Apple Music interface that replaces the on-screen keyboard with music controls, such as an equalizer, when one of these applications is running. It’s easy also to imagine what kind of interfaces third-party developers could build: turntables for DJs, drawing pads for illustrators, advanced calculator keyboards for eggheads, speech notes for business presentations and game-specific game controls for games.



Don't Drown Yourself With Big Data: Hadoop May Be Your Lifeline

Just how "big" is Big Data? According to IBM, 2.5 quintillion bytes of data are created every day, and 90 percent of all the data in the world was created in the last two years. Realizing the value of this huge information store requires data-analysis tools that are sophisticated enough, cheap enough, and easy enough for companies of all sizes to use. Many organizations continue to consider their proprietary data too important a resource to store and process off premises. However, cloud services now offer security and availability equivalent to that available for in-house systems. By accessing their databases in the cloud, companies also realize the benefits of affordable and scalable cloud architectures. The Morpheus database-as-a-service offers the security, high availability, and scalability organizations require for their data-intelligence operations. Performance is maximized through Morpheus's use of 100-percent bare-metal SSD hosting.


How business intelligence in banking is shifting the paradigm

Banking has always been a competitive environment, even before the digitization of the industry acquired its present pace. Thanks to financial technology, the competition has become even tougher. Fintech companies are to banks what Uber is to taxis, and, as we know, taxi drivers aren’t happy about Uber. Apart from having their profits endangered by fintech companies, banks also face extreme pressure from regulators. Since the 2008 crisis, regulatory agencies such as the FRB, OCC and FDIC have been watching banks closely. And while most banks didn’t participate in the activities that led to the crisis, all of them must follow the strict compliance rules adopted after the market crash. Competitive business intelligence solutions for banking have to reflect all these requirements. They have to be flexible and transparent to adapt to the competitive and regulatory environment. They have to be scalable to keep up with the growing digitization of the industry, as more clients forget the last time they physically visited a branch.


4 steps to implementing high-performance computing for big data processing

The message for company CIOs is clear: if you can avoid HPC and just use Hadoop for your analytics, do it. Hadoop is cheaper, easier for your staff to run, and can even live in the cloud, where a third-party vendor operates it for you. Unfortunately, being an all-Hadoop shop is not possible for the many life sciences, weather, pharmaceutical, mining, medical, government, and academic companies and institutions that require HPC for their processing. Because their files are large and their processing needs extreme, standard network communications, and with them the cloud, aren't viable alternatives either. In short, HPC is a perfect example of a big data platform that is best run in-house in a data center. The challenge, then, becomes: how do you and your staff ensure that the very expensive hardware you invest in stays in the best shape to do the job you need it to do?


ONF puts focus on white box switches with Stratum project


ONF intends to make Stratum available on a broad selection of networking silicon and white box switches. Stratum will also work with existing deployed systems, as well as future versions of programmable silicon. Stratum uses recently released SDN interfaces and doesn't embed control protocols. Instead, it's designed to support external network operating systems or embedded switch protocols, like Border Gateway Protocol. In this way, ONF said the Stratum project will be more versatile and available for a broader set of use cases. Founding members of the Stratum project include Big Switch Networks, VMware, Barefoot Networks, Broadcom and Google -- which donated production code to initiate the project for open white box switches. "Google has contributed the latest and greatest, and just because it's Google [its participation in the project] makes it reasonably significant," Doyle said.


Wave Computing close to unveiling its first AI system

"A bunch of companies will have TPU knock-offs, but that's not what we do--this was a multi-year, multi millions of dollars effort to develop a completely new architecture," CEO Derek Meyer said in an interview. "Some of the results are just truly amazing." With the exception of Google's TPUs, the vast majority of training is currently done on standard Xeon servers using Nvidia GPUs for acceleration. Wave's dataflow architecture is different. The Dataflow Processing Unit (DPU) does not need a host CPU and consists of thousands of tiny, self-timed processing elements designed for the 8-bit integer operations commonly used in neural networks. Last week, the company announced that it will be using 64-bit MIPS cores in future designs, but this really for housekeeping chores. The first-generation Wave board already uses an Andes N9 32-bit microcontroller for these tasks, so MIPS64 will be an upgrade that will give the system agent the same 64-bit address space as the DPU as well as support for multi-threading so tasks can run on their own logical processors.


How to use Linux file manager to connect to an sftp server

The sftp command is quite easy. Open a terminal window and log in with the command sftp USERNAME@IPADDRESS (where USERNAME is the actual remote username and IPADDRESS is the address of the remote machine). Once logged in, you can download files onto your local machine with the command get FILENAME and upload them with the command put FILENAME (where FILENAME is the name of the file in each case). But what if you don't want to work with the command line? Maybe you find a GUI the more efficient tool. If that's you, you're in luck, as most Linux file managers have built-in support for SSH and its related tools. With that in mind, you can enjoy a GUI sftp experience without having to install a third-party solution. As you might expect, this is quite easy to pull off. I'm going to demonstrate how to connect to a remote Ubuntu 16.04 server via the sftp protocol, using both Elementary OS Pantheon Files and GNOME Files.
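For completeness, the same get operation can also be scripted. Below is a minimal sketch in Rust using the community ssh2 crate, which is my own illustration rather than anything the article covers; the host address, credentials, and remote path are all placeholders you would substitute.

```rust
use ssh2::Session;
use std::io::Read;
use std::net::TcpStream;
use std::path::Path;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder server and credentials; substitute your own details.
    let tcp = TcpStream::connect("192.168.1.100:22")?;
    let mut sess = Session::new()?;
    sess.set_tcp_stream(tcp);
    sess.handshake()?;
    sess.userauth_password("USERNAME", "PASSWORD")?;

    // Open an SFTP channel over the SSH session and read a remote file:
    // the programmatic equivalent of `get FILENAME`.
    let sftp = sess.sftp()?;
    let mut remote = sftp.open(Path::new("/home/USERNAME/FILENAME"))?;
    let mut contents = Vec::new();
    remote.read_to_end(&mut contents)?;
    println!("downloaded {} bytes", contents.len());
    Ok(())
}
```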


Deep Feature Synthesis Is the Future of Machine Learning

When data is conceptualized properly, sophisticated AI algorithms can make the most ingenious observations; algorithms with access to the right type of data may seem virtually omniscient. Unfortunately, real-world inputs can’t always be processed into the kind of data these algorithms depend on. At its core, machine learning runs on numerical data, and some qualitative data is not easily converted into a usable format. As human beings, we have one advantage over the AI algorithms we sometimes expect to replace us: we understand the nuances of variables that aren’t easily broken down into strings of thousands of zeros and ones. The artificial intelligence solutions we praise have yet to grasp this, and the binary representation that drives them has not fundamentally changed in the half century since it was first conceived.
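As one concrete instance of turning qualitative data into the numeric form machine learning expects, here is a minimal one-hot-encoding sketch in Rust. It is my own illustration of the simplest such conversion; deep feature synthesis itself automates far richer transformations than this.

```rust
use std::collections::BTreeMap;

// One-hot encoding: map each category to a basis vector so a
// qualitative feature becomes numeric input for a learning algorithm.
fn one_hot(values: &[&str]) -> Vec<Vec<f32>> {
    // Assign each distinct category a stable column index.
    let mut index = BTreeMap::new();
    for v in values {
        let next = index.len();
        index.entry(*v).or_insert(next);
    }
    values
        .iter()
        .map(|v| {
            let mut row = vec![0.0; index.len()];
            row[index[v]] = 1.0;
            row
        })
        .collect()
}

fn main() {
    let colors = ["red", "green", "blue", "green"];
    for (value, row) in colors.iter().zip(one_hot(&colors)) {
        println!("{value:>5} -> {row:?}");
    }
}
```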


4 key steps to building a comprehensive data strategy

As chief data officers and data scientists play more prominent roles in developing data strategies, we see organizations struggling to contend with these challenges and taking a shortsighted ‘save it now, worry about it later’ approach. These situations worsen as data becomes more active and distributed across the enterprise: many groups and individuals implement unique or customized data management and storage solutions that often begin as unmanaged ‘aaS’ (as a service) projects and evolve into critical production systems with challenging data governance, security, access and cost dynamics. Organizations that invest in developing and implementing a strategic data plan are fundamentally better prepared to anticipate, manage and capitalize on the growing challenges and possibilities of data.



Quote for the day:


"There are things known and there are things unknown, and in between are the doors of perception." -- Aldous Huxley