Daily Tech Digest - June 23, 2019

Facebook's Libra Cryptocurrency Prompts Privacy Backlash

Facebook's cryptocurrency plans have raised bipartisan concerns, with Rep. Patrick McHenry, R-N.C., telling The Verge: "It is incumbent upon us as policymakers to understand Project Libra. We need to go beyond the rumors and speculations and provide a forum to assess this project and its potential unprecedented impact on the global financial system." On Wednesday, the U.S. Senate Banking Committee announced it would hold a hearing about the company's cryptocurrency plans on July 16. So far, the committee has not released a list of witnesses it intends to call, according to Reuters. A Facebook spokesman tells Information Security Media Group: "We look forward to responding to lawmakers' questions as this process moves forward." Besides new concerns over its cryptocurrency plans, Facebook is already facing scrutiny from the U.S. Federal Trade Commission regarding its data-sharing practices, with the company preparing to pay a fine of as much as $3 billion. Facebook has been bound by an agreement with the FTC since 2011 that stems from previous privacy missteps, including sharing data without consent.



A CISO's Insights on Breach Detection

"You have to identify what potentially anomalous behavior is, know what you're logging and reporting on, and make sure you have team members who are available to address these anomalies." Key steps, the CISO says, include using appropriate technologies, such as security information and event management (SIEM) tools, as well as effectively using security team resources "to conduct root cause analysis to identify what's going on." Parker will be a featured speaker at ISMG's Healthcare Security Summit in New York on June 25. He will join other CISOs and security experts who will address breach detection and an array of other top security challenges. In the interview, Parker also discusses: conquering "alarm fatigue," which often slows the process of identifying breaches; why many insider breaches are more difficult to detect than some incidents involving hackers; and the growing breach risks posed by supply chain vendors and other third parties, including incidents potentially involving compromised application programming interfaces.


Rise in business-led IT spend increases risks and opportunities


Despite commanding larger budgets for technology, CIOs also seem to be losing influence, with the percentage of CIOs sitting on the board falling from 71% to 58% in two years, according to the research. However, Bates does not think fewer CIOs sitting on the board will have a negative impact on business-led IT projects – or even IT projects in general. “CIOs continue to exert a strong degree of influence and are being joined by a new generation of technology-savvy executives like the chief technology officer, chief digital officer and chief data officer,” he said. “As organisations mature into this new paradigm of a coalition of technology leaders, there will be more effective governance at all levels. “We are at a moment in time where the CIO is still best positioned to advise the board and senior business leaders on technology and will increasingly have deep subject matter expertise from fellow executives to inform decision-making.” Beyond the disconnect between business and IT, another issue highlighted by the study is the slow progress in diversity and inclusion, with 74% of IT leaders polled saying related initiatives are, at most, “moderately successful”, with only minimal growth in women on tech teams – rising to 22% this year, compared with 21% last year.


MongoDB grows its solution portfolio while boosting its flagship platform

Positioning its document database as a platform for AI/machine learning app developers, MongoDB this week announced the beta of MongoDB Atlas Data Lake. This new serverless offering supports rich data analytics via the MongoDB Query Language. It supports polymorphic data in multiple schema-free formats, compressed or uncompressed, at any scale. It will support a consolidated user interface and billing with on-demand, usage-based pricing. For storage, MongoDB Atlas Data Lake allows customers to "bring your own bucket," such as AWS S3, with MongoDB charging customers only for the ability to query the stored data through the Data Lake service. It allows customers to quickly query data on S3 in any format, including JSON, BSON, CSV, TSV, Parquet and Avro, using the MongoDB Query Language. By bringing the MongoDB Query Language to the MongoDB Atlas Data Lake, this service enables developers to use that language across data on S3, making the querying of massive data sets easier and more cost-effective.
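
For a concrete sense of what this looks like from application code, here is a minimal Python sketch. The connection string, database, collection and field names below are illustrative placeholders (a real Atlas Data Lake is configured per account); the point is that an ordinary MongoDB Query Language aggregation runs unchanged against S3-backed files:

```python
# Hypothetical sketch of querying S3-backed files through Atlas Data Lake.
# The connection string and all names below are illustrative placeholders.
from pymongo import MongoClient

# An Atlas Data Lake exposes an ordinary MongoDB connection string; the
# S3 bucket behind it is mapped to databases/collections in Atlas config.
client = MongoClient("mongodb://user:password@datalake0.example.mongodb.net/")
orders = client["sales_lake"]["orders_2019"]   # backed by files in S3

# Standard MQL aggregation, regardless of whether the underlying files
# are JSON, BSON, CSV, TSV, Parquet or Avro:
pipeline = [
    {"$match": {"region": "EMEA"}},
    {"$group": {"_id": "$product", "revenue": {"$sum": "$amount"}}},
    {"$sort": {"revenue": -1}},
]
for row in orders.aggregate(pipeline):
    print(row)
```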


Using a Microservices Architecture to Develop Microapps


Let's look at microapps in terms of mobility, because today we have a problem. Modern enterprises often have 20 or more web and mobile apps that you, as an employee, have to use just to get your job done. And how much functionality do you really use within these apps? It can be hard to find what we want, when we want it. We are also suffering from app fatigue, both as app consumers in our personal lives and in our work lives, as we deal with tens to hundreds of apps on our devices. As app developers we are also suffering, because these 20+ apps have to be maintained. We are also fielding requests for more and more apps, just adding to the maintenance pile. ... However, with a proper microapps platform, instead of marching through these steps for each new mobile experience we create, microapps allow us to do the hard and boring stuff once, and focus on the practical features and engaging experiences that we ultimately want to deliver! ... From a technical perspective, Kinvey Microapps enables you, as an app developer, to be dramatically more productive in delivering mobile experiences to your users.


Ransomware gang hacks MSPs to deploy ransomware on customer systems

Hanslovan said hackers breached MSPs via exposed RDP (Remote Desktop Protocol) endpoints, elevated privileges inside compromised systems, and manually uninstalled AV products, such as ESET and Webroot. In the next stage of the attack, the hackers searched for Webroot SecureAnywhere accounts, a remote management console used by MSPs to manage remotely located workstations on their customers' networks. According to Hanslovan, the hackers used the console to execute a PowerShell script on remote workstations; the script downloaded and installed the Sodinokibi ransomware. The Huntress Labs CEO said at least three MSPs had been hacked this way. Some Reddit users also reported that in some cases, hackers might have also used the Kaseya VSA remote management console, but this was never formally confirmed. "Two companies mentioned only the hosts running Webroot were infected," Hanslovan said.


Navigating the Path Toward Becoming an Intelligent Enterprise


From an operations standpoint, the Index indicates that 82 percent of surveyed companies are sharing information from their IoT solutions with employees more than once a day, an increase of 12 percentage points from the previous year. In fact, approximately two-thirds of these companies share operational data about enterprise assets, including status, location, utilization or preferences, in real or near-real time to help drive better, more timely decisions. This shows that brands are making the transition to Industry 4.0—using connected, automated systems to collect and analyze data during every step of their processes and bridging the gap between the digital and physical to maximize efficiency, productivity, and transparency. ... It is not an easy task to quantify how "intelligent" an enterprise is or how much the manufacturing and transportation and logistics (T&L) space is changing to adopt IoT solutions. This intelligence cannot simply be determined by which technology solutions a company utilizes or how open-minded it is about new processes.


Why Cybersecurity Takeovers Are Surging As Stocks Reach New Highs

Cybersecurity investor Ron Gula noted that chatter of a forthcoming recession often allows private backers to put more pressure on startups to raise money, pushing them to cash out sooner. As more companies see rivals go the M&A or public route, "this can create a sense of urgency," Gula told Fortune. Another factor driving the exit wave is the timing of the cybersecurity venture capital boom, which started about five years ago, making many companies ripe for an exit around the same time. Meanwhile, there are more potential buyers across industries, because companies not traditionally regarded as cybersecurity firms are looking to add the offering to their portfolios. "They see the benefit of saying, 'We have lots of data, we're gonna look to add security to that data,'" explained Enrique Salem, former CEO of cybersecurity company Symantec and a current investor at Bain Capital Ventures, per Fortune.


"This research highlights the fact that building a strong cyber security culture and subscribing to the right best practices can help organisations of any size maximise their security effectiveness," said Wesley Simpson, (ISC)² chief operating officer. "It's a good reminder that in any partner ecosystem, the responsibility for protecting systems and data needs to be a collaborative effort, and multiple fail-safes should be deployed to maintain a vigilant and secure environment. The blame game is a poor deterrent to cyber attacks." Nearly two-thirds (64%) of large enterprises outsource at least a quarter (26%) of their daily business tasks, which requires them to allow third-party access to their data. These outsourced functions can include anything from research and development to IT services and accounts payable. This data access and sharing is necessary as a large enterprise scales its operations, but the research indicates that access management and vulnerability mitigation are often overlooked.


Top 5 Aspects That Can Strengthen Your Data Governance Framework

There is a reason why the term 'data dump' is popular. The only job of a data source is to collect information and 'dump' it where you can access it. This is why businesses have to sift through petabytes of data just to find something meaningful from which to draw business insights. It is only after this data has been categorized into usable, helpful portions that it starts to be treated as an asset. Data quality is, therefore, the simple act of converting raw data into a usable form and maintaining it as an asset. Data governance helps you uncover new sources of information and draw better business value from your data. It can also identify broken or missing pieces of information and prevent duplicate records from interfering with one another. Through data governance, outdated information can be flagged for attention, and critical data can be highlighted to the right teams within the organization. Broken links, incomplete files, incorrect prioritization and similar incidents all greatly affect data quality. Data governance practices help fix such occurrences and maintain data quality over time.
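
As a toy illustration of the checks a governance process automates, here is a short Python/pandas sketch (the data and the staleness threshold are invented) that flags duplicates, incomplete records and outdated rows:

```python
import pandas as pd

# Invented sample data: a governance-style quality pass over customer records.
df = pd.DataFrame({
    "customer": ["Acme", "Acme", "Globex", None],
    "email": ["a@acme.com", "a@acme.com", None, "x@initech.com"],
    "last_updated": pd.to_datetime(
        ["2019-06-01", "2019-06-01", "2016-01-15", "2019-05-20"]),
})

dupes = df[df.duplicated()]                       # duplicate records
incomplete = df[df.isna().any(axis=1)]            # broken/missing pieces
stale = df[df["last_updated"] < "2018-01-01"]     # outdated information to flag

print(len(dupes), "duplicates,", len(incomplete), "incomplete,", len(stale), "stale")
```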



Quote for the day:


"Character matters; leadership descends from character." -- Rush Limbaugh


Daily Tech Digest - June 22, 2019

Why AI is here to stay

So here's why AI is not a fad: in real life, there's no way I'm giving up my ability to fall back on teaching with examples if I'm not clever enough to come up with the instructions. Absolutely not! I'm pretty sure I use examples more than instructions to communicate with other humans when I stumble around the real world. AI means I can communicate with computers that second way — via examples — not only by instructions. Are you seriously asking me to suddenly gag my own mouth? Remember, in the old days we had to rely primarily on instructions only because we couldn't do it the other way, in part because processing all those examples would strain the meager CPUs of last century's poor desktops. But now that humanity has unlocked its ability to express itself to machines via examples, why would we suddenly give that option up entirely? A second way of talking to computers is too important to drop like yesterday's shoulder pads. What we should drop is our expectation that there's a one-size-fits-all way of communicating with computers about every problem. Say what you mean and say it the way that works best.


Pledges to Not Pay Ransomware Hit Reality

"I don't think you can make a blanket statement of 'pay the ransom' or 'don't pay the ransom,'" says Adam Kujawa, director of the research labs at security firms Malwarebytes. "If you have failed to segment your data or your network, or failed to check your backups or other measures to get your company back on track quickly, then you will have to deal with the fallout." One problem for companies: Ransomware operators have shifted away from blanketing consumers and businesses with opportunistic ransomware attacks and now almost exclusively target business and municipalities. Along with that shift, the cost of ransoms has quickly grown because such organizations can afford to pay. Now, many organizations are faced with seven-digit ransom demands, Zelonis says. "That's a heck of a payday," he adds. The increase in ransom demands is driven by attackers' targeting and research on victims, he says.


End of the line for Internet Explorer 10 might mean updating embedded systems


Microsoft hasn't given specific dates yet; IE11 is coming to the Update Catalog sometime in spring 2019 (which likely means before the end of June), with the other upgrade options coming later in 2019. That means you won't have many months to test and validate IE11 on any systems where you're still using IE10, so you will want to plan your test labs and pilot rings now. Microsoft deliberately didn't put the new Edge browsing engine into IE11 because of enterprise concerns that it might cause compatibility problems. Instead, it still uses the Trident engine and includes document modes that emulate the IE5, IE7, IE8, IE9 and IE10 rendering engines. There are also specific Enterprise Modes to emulate IE8, and IE8 in Compatibility View, but if your sites worked in IE10 you won't need those. What you will need to change are sites that have the x-ua-compatible meta tag or HTTP header set to 'IE=edge'; in IE10 that means Internet Explorer 10 mode, but in IE11 it means Internet Explorer 11 mode, because it's just asking for the latest IE version. Set it to 'IE=10' if the site has problems.


Expect graph database use cases for the enterprise to take off

As useful as graph databases are for certain types of queries and analysis, graph tools will present several challenges to CIOs, Moore warned. Data engineers and business experts need to learn new skill sets and create new workflows for defining and refining the graph data models used for these applications. Classical SQL databases were optimized to conserve memory and CPU. They are still the best technology for many kinds of applications, such as ERP, that involve doing a lot of columnar addition. But joining database tables together to do new kinds of queries can add considerable overhead to SQL databases. As a result, new types of queries can be limited by memory capacity. In contrast, graph databases, as noted, precompute these relationships in a way that speeds analytics and shrinks the size of the data store. In one project, Moore said he managed to shrink a 5 TB SQL database into a 2 TB graph database. A big challenge that must be factored into graph database use cases is their slower performance when writing to the database.
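
To see why precomputing relationships pays off, here is a toy contrast in plain Python (no real database engine; data and names are invented). A join-style lookup re-scans the relationship table on every hop, while the graph-style store materializes each node's edges at write time:

```python
# Toy contrast (no real database engine): join-style lookup vs. adjacency.
follows_table = [("alice", "bob"), ("bob", "carol"), ("alice", "dave")]

def join_lookup(user):
    # Relational flavor: re-scan the relationship table on every hop,
    # roughly the repeated work a SQL join performs.
    return [dst for src, dst in follows_table if src == user]

# Graph flavor: relationships are materialized per node at write time...
adjacency = {}
for src, dst in follows_table:
    adjacency.setdefault(src, []).append(dst)

def graph_lookup(user):
    # ...so a hop is a direct O(degree) read instead of a table scan.
    return adjacency.get(user, [])

# A two-hop traversal ("friends of friends") is just chained lookups:
print([f2 for f1 in graph_lookup("alice") for f2 in graph_lookup(f1)])  # ['carol']
```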


7 Types Of Artificial Intelligence

Since AI research purports to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified under one of multiple types. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency is considered a more evolved type of AI, while an AI that has limited functionality and performance is considered a simpler and less evolved type. Based on this criterion, there are two ways in which AI is generally classified. The first classifies AI and AI-enabled machines based on their likeness to the human mind, and their ability to "think" and perhaps even "feel" like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.


The logic of digital change

Disruption may be a yawn, but the fact is that the internet is changing things slowly but surely, and specifically it began when cloud and APIs allowed start-ups to bootstrap and launch on a shoestring. Now, there are 12,000 start-ups globally attracting investment that has been doubling each year – $111.8 billion last year – so something is happening. Don't be complacent. Nothing may have happened in the last quarter century, but something will happen in the next, and only the banks that adapt will survive, as Charles Darwin would say. ... there is specifically a fourth revolution of humanity occurring where the people who historically could not be reached by banks are now being reached by technology. The financially illiterate, the folks who aren't worth it, the financially vulnerable, the unbankable, are all getting to be included because that's what digital does. In a world where we distribute money physically, you cannot afford to deal with someone in a remote African village; in a world where we distribute money digitally, even the guy sitting in a village near the base camp of Mount Everest can trade and transact.


A.I. Ethics Boards Should Be Based on Human Rights


Human rights are imperfect ideals, subject to conflicting interpretations, and embedded in agendas with “outsized expectations.” Though supposedly global, human rights aren’t honored everywhere. Nevertheless, the United Nations Universal Declaration of Human Rights is the best statement ever crafted for establishing all-around social and legal equality and fundamental individual freedoms. The Institute of Electrical and Electronics Engineers rightly notes that human rights are a viable benchmark, even among diverse ethical traditions. “Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age.” Technology companies should embrace this standard by explicitly committing to a broadly inclusive and protective interpretation of human rights as the basis for corporate strategy regarding A.I. systems. They should only invite people to their A.I. ethics boards who endorse human rights for everyone.



Accelerating Digital Innovation Inside & Out


Not only are digitally maturing companies more likely to use cross-functional teams, those teams generally function differently in more mature organizations than in less mature organizations. They’re given greater autonomy, and their members are often evaluated as a unit. Participants on these teams are also more likely to say that their cross-functional work is supported by senior management. For more advanced companies, the organizing principle behind cross-functional teams is shifting from projects toward products. Digitally maturing companies are more agile and innovative, but as a result they require greater governance. Organizations need policies that create sturdy guardrails around the increased autonomy their networking strength allows. Digitally maturing companies are more likely to have ethics policies in place to govern digital business. Policies alone, however, are not sufficient. Only 35% of respondents across maturity levels say their company is talking enough about the social and ethical implications of digital business.


Three hacking trends you need to know about to help protect yourself


"The blurred lines between the techniques used by nation-state actors and those used by criminal actors have really gotten a lot fuzzier," says Jen Ayers, vice president of OverWatch cyber intrusion detection and security response at CrowdStrike. "Many criminal organisations are still very loud, but the fact is rather than going the traditional spam email route that they have been before, they are actively intruding onto enterprise networks, they are targeting unsecured web servers and going in, stealing credentials and doing reconnaissance," she adds. This is another tactic which malicious threat actors are beginning to deploy in order to both avoid detection and make attacks more effective – conducting campaigns that don't focus on Windows PCs and other common devices used in the enterprise. With these devices sitting in front of users every single day, and a top priority for antivirus software, there's a higher chance that an attack on these devices will either be prevented by security measures or spotted by users.


Data Strategy: Essential elements to enhance it

Elena Alfaro, head of data and open innovation at the client solutions division of the Spanish bank BBVA, described her organization's work of "spreading the culture of data" and ensuring that the senior leadership of an organization is on board with the data initiatives. "What I've learned is if the person you're sitting with doesn't understand, it is very difficult to get to something big," said Alfaro. For the past two years, Forrester has ranked the BBVA's mobile app the best in the banking business. Forrester's Aurelie L'Hostis credited the bank's app for "striking a superb balance between useful functionality and excellent user experience," a product that Alfaro says grew out of a data strategy with the end user in mind. "Digital banks listen to their customers, they're clever with data, and they work hard on making it easy for customers to manage their financial lives," L'Hostis writes. "It's not a small feat, but that's what your customers are demanding." But regardless of the industry, Wixom argues that companies with a successful data strategy implement a framework that ensures a high level of data integrity and makes sure that it is broadly and easily accessible.



Quote for the day:


"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano


Daily Tech Digest - June 21, 2019

Defining a Test Strategy for Continuous Delivery

Defining the test cases requires a different mindset than implementing the code. It's better that the test cases are not defined by the same person who implemented the feature. Implementing good automated tests requires serious development skills. This is why, if there are people on the team who are just learning to code (for example, testers who are new to test automation), it's a good idea to make sure that the team is giving them the right amount of support to skill up. This should be done through pairing, code review, and knowledge-sharing sessions. Remember that the entire team owns the codebase. Don't fall into the split-ownership trap, in which production code is owned by the devs and test code is owned by the testers. This hinders knowledge sharing, introduces test case duplication, and can lead to a drop in test code quality. Developers and testers are not the only ones who care about quality. Ideally, the Product Owner should define most of the acceptance criteria. She is the one who has the best understanding of the problem domain and its essential complexity, so she should be a major contributor when writing acceptance criteria.



Blockchain expert Alex Tapscott sees coming crypto war as 'cataclysmic'

Digital technology has had a profound impact on virtually every aspect of our lives – except for banking. The institutions we rely on as trusted intermediaries to move, store and manage value, exchange financial assets, enable funding and investment and insure against risk are more or less unchanged since the advent of the internet. This is changing, thanks to blockchain. Libra is only the latest in a wave of revolutionary new innovations that is beginning to disrupt the old model. Bitcoin remains the most consequential and important innovation in at least a generation. It laid the groundwork for a new internet of value that promises to do to value industries, like financial services, what the internet did to information industries, like publishing. At first, the impact on banks will be muted. In fact, Facebook will need to rely on some existing banking infrastructure to successfully launch Libra. Over time, however, Libra could cut banks out of many aspects of the industry altogether. I share the deep belief that Bitcoin will do the same.


The downfall of the virtual assistant (so far)

We've talked plenty about the reasons why everyone and their mother wants you to get friendly with their flavor of robot aid — and why that, in turn, has led to what I call the post-OS era, in which a device's operating system is less important than the virtual assistant threaded throughout it. It's no coincidence that Google is slowly expanding Assistant into a platform of its own, and what we're seeing now is almost certainly just the tip of the iceberg. Something we haven't discussed much, though, is a painful reality that often gets overlooked in all the glowing coverage about this-or-that new virtual assistant gizmo or feature. And for anyone who ever tries to rely on this type of talking technology — be it for on-the-go answers from your phone, on-the-fly device control in your home, or hands-free help in your office — it's a reality that's all too apparent. The truth is, for all of their progress and the many ways in which they can be handy, voice assistants still fail far too frequently to be dependable. And the more Google and other companies push their virtual assistants and expand the areas in which they operate, the more pressing the challenge to correct this problem becomes.


Introduction to Reinforcement Learning


Why are we talking about all this? What does this mean to us, except that we need to have pets if we want to become famous psychologists? What does this all have to do with artificial intelligence? Well, these topics explore a type of learning in which some subject is interacting with the environment. This is the way we as humans learn as well. When we were babies, we experimented. We performed some actions and got a response from the environment. If the response was positive (a reward), we repeated those actions; otherwise (a punishment), we stopped doing them. In this article, we will explore reinforcement learning, a type of learning inspired by this goal-directed learning from interaction. ... Another type of learning is unsupervised learning. In this type of learning, the agent is provided only with input data, and it needs to make some sort of sense out of it. The agent is basically trying to find patterns in otherwise unstructured data. This type of learning is usually applied to clustering problems.
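
A minimal code sketch of this reward/punishment loop is a two-armed bandit with an epsilon-greedy agent; the environment below and its payout probabilities are invented for illustration:

```python
import random

# Minimal reward-driven learning: a two-armed bandit with an epsilon-greedy
# agent. The environment and its payout probabilities are invented.
payout_prob = [0.3, 0.7]      # hidden from the agent
value = [0.0, 0.0]            # agent's running reward estimate per action
counts = [0, 0]
epsilon = 0.1                 # how often to explore rather than exploit

for _ in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(2)         # explore a random action
    else:
        action = value.index(max(value))     # repeat the best-known action
    reward = 1.0 if random.random() < payout_prob[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean

print(value)   # the estimate for arm 1 approaches 0.7, so the agent favors it
```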


Cyberwarfare escalation just took a new and dangerous turn


In the murky world of espionage and cyberwarfare, it's never entirely clear what's going on. Does the US really have the capabilities to install malware in Russian energy systems? If so, why would the intelligence agencies be comfortable (as they seem to be) with the story being reported? Is this an attempt to warn Russia and make its government worry about malware that might not even exist? But beyond the details of this particular story, there are a number of major concerns here -- particularly around unexpected consequences and the escalation of cyberwarfare risks. It's very hard for a company (or a government) to tell the difference between hackers probing a network as part of general reconnaissance and the early stages of an attack itself. So even probing critical infrastructure networks could raise tensions. There's significant risk in planting malware inside another country's infrastructure with the aim of using it in future. The code can be discovered, which is at the very least embarrassing and, worse, could be seen as a provocation. It could even be reverse-engineered and used against the country that planted it.


Nutanix XI IoT: An Overview For Developers

By distributing the computing part of the problem to the edge, we can execute detection-decision-action logic with limited latency. For example, immediate detection might mean a defective product never leaves the production line, much less makes it to the customer. The consequences of receiving a defective item can range from inconvenient to catastrophic. If it is an article of clothing, the item might require a return. While this may have a range of negative consequences for the business, it does not compare to the consequences of having a defective part installed in an aircraft. Edge computing of data created by IoT edge devices can clearly benefit business but, as we mentioned earlier, as the number and diversity of devices grows, so does the workload for developers attempting to write applications for these devices. Configuring devices, networking devices, managing devices and data streams ... these are all tasks that distract developers from the primary task at hand: creating the applications that use IoT data to serve the needs of your business.
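
Schematically, that detection-decision-action pattern reduces to a tight loop running on the edge node itself. The Python sketch below is illustrative only, not the Xi IoT API; the sensor read, threshold and actuator are placeholders for whatever a real production line exposes:

```python
import random
import time

# Schematic detect-decide-act loop as it might run on an edge node.
# NOT the Xi IoT API: sensor, threshold and actuator are placeholders.
DEFECT_THRESHOLD = 0.8            # assumed score above which an item is defective

def read_inspection_score() -> float:
    return random.random()        # stand-in for a camera/ML inference result

def divert_item() -> None:
    print("defect detected -> item diverted before it leaves the line")

for _ in range(1_000):                # in production this would run continuously
    score = read_inspection_score()   # detect locally, no cloud round-trip
    if score > DEFECT_THRESHOLD:      # decide
        divert_item()                 # act within milliseconds
    time.sleep(0.001)
```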


Blockchain and AI combined solve problems inherent in each


Best known as the technology that powered bitcoin, blockchain offers an immutable record of every transaction, ensuring that all nodes have the same version of the truth and no records are tampered with. That makes it a relatively fail-safe and hack-proof method for storing and transferring monetary value. But to ensure this safety, the nodes have to go through huge calculations to ensure the validity of the transactions. Blockchain's mechanism for ensuring safety is also its weakness, as it limits scalability. The same is true for blockchain's immutability; every record needs to store the entire history of all transactions. The problems associated with AI are different. AI needs data to operate, but getting good data can be problematic. For instance, hackers can alter the data a machine is trained on with a data poisoning attack. Collecting data from clients is also problematic, especially in light of data privacy laws such as Europe's GDPR. Finally, most of the data needed for effective AI is owned by large organizations, such as Google and Facebook.



In an effort to ensure the UK's resilience to attacks that exploit vulnerabilities in network-connected cameras, the SCC said the minimum requirements were an important step forward for manufacturers, installers and users alike. The work has been led by Mike Gillespie, cyber security advisor to the SCC and managing director of information security and physical security consultancy Advent IM, along with Buzz Coates, business development manager at CCTV distributor Norbain. The standard was developed in consultation with surveillance camera manufacturers Axis, Bosch, Hanwha, Hikvision and Milestone Systems. Speaking ahead of the official launch, Gillespie said that if a device came out of the box in a secure configuration, there was a good chance it would be installed in a secure configuration. "Encouraging manufacturers to ensure they ship their devices in this secure state is the key objective of these minimum requirements for manufacturers," he said. Manufacturers benefit, said Gillespie, by being able to demonstrate that they take cyber security seriously and that their equipment is designed and built to be resilient.


3 top soft skills needed by today’s data scientists


Data scientists who can understand the business context, plus the technical side of the equation, will be invaluable. This kind of “bilingual” talent can turn data streams into a predictive model, and then translate that model into a working reality, such as for financial forecasting. Core skills in storytelling, problem solving, agile development, and design thinking are critical to interoperating within different business contexts as well. The key is to develop T-shaped skillsets, as opposed to being I-shaped. While I-shaped people have a deep, narrow understanding of one area (like data engineering or data science), T-shaped people have both in-depth knowledge in one area and a breadth of understanding of several others. It is easier for T-shaped people to meld their data expertise to a broad range of use cases and industries. ... The communication side will be especially important as data expertise gets pulled into interdisciplinary use cases. Data scientists will have to be able to talk to people with different backgrounds. This goes back to the need to be more T-shaped to effectively translate highly technical ideas to different business contexts.


Using OpenAPI to Build Smart APIs for Dumb Machines

OpenAPI isn't the only spec for describing APIs, but it is the one that seems to be gaining prominence. It started life as Swagger and was rebranded OpenAPI with its donation to the OpenAPI Initiative. RAML and API Blueprint have their own adherents. Other folks like AWS, Google, and Palantir use their own API specs because they predate those other standards, had different requirements or found even opinionated specs like OpenAPI insufficiently opinionated. I'll focus on OpenAPI here because its surging popularity has spawned tons of tooling. The act of describing an API in OpenAPI is the first step in the pedagogical process. Yes, documentation for humans to read is one obvious output, but OpenAPI also lets us educate machines about the use of our APIs to simplify things further for human consumers and to operate autonomously. As we put more and more information into OpenAPI, we can start to shift the burden from humans to the machines and tools they use. With so many APIs and so much for software developers to know, we've become aggressively lazy by necessity. APIs are a product; reducing friction for developers is a big deal.
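
Here's a deliberately tiny example of such a machine-readable description: a hypothetical one-endpoint API sketched as an OpenAPI 3.0 document, built as a Python dict and serialized to JSON. Documentation generators, client generators and request validators can all consume exactly this artifact:

```python
import json

# A tiny, hypothetical OpenAPI 3.0 description built as a plain dict.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Widgets API", "version": "1.0.0"},
    "paths": {
        "/widgets/{id}": {
            "get": {
                "summary": "Fetch one widget",
                "parameters": [{
                    "name": "id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "integer"},
                }],
                "responses": {
                    "200": {"description": "The widget"},
                    "404": {"description": "No such widget"},
                },
            }
        }
    },
}
print(json.dumps(spec, indent=2))   # the contract both humans and tools read
```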



Quote for the day:


"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.


Daily Tech Digest - June 20, 2019

Researchers say 6G will stream human brain-caliber AI to wireless devices


The most relatable one would enable wireless devices to remotely transfer quantities of computational data comparable to a human brain in real time. As the researchers explain it, “terahertz frequencies will likely be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” Put another way, a wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site. Some of that might sound familiar, as similar remote control concepts are already in the works for 5G — but with human operators. The key with 6G is that all this computational heavy lifting would be done by human-class artificial intelligence, pushing vast amounts of observational and response data back and forth. By 2036, the researchers note, Moore’s law suggests that a computer with human brain-class computational power will be purchasable by end users for $1,000, the cost of a premium smartphone today; 6G would enable earlier access to this class of computer from anywhere.
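
As a back-of-the-envelope check on that projection, the arithmetic looks roughly like the sketch below. Every constant is an assumption for illustration: brain-compute estimates vary by orders of magnitude, and effective doubling periods are debated, which is why the result only lands in the same mid-2030s neighborhood as the researchers' 2036 figure:

```python
import math

# All constants are illustrative assumptions, not figures from the paper.
BRAIN_OPS = 1e16         # assumed ops/sec for "human brain-class" compute
OPS_PER_1K_2019 = 1e13   # assumed ops/sec purchasable for $1,000 in 2019
DOUBLING_YEARS = 1.8     # assumed effective Moore's-law doubling period

doublings = math.log2(BRAIN_OPS / OPS_PER_1K_2019)   # ~10 doublings needed
year = 2019 + doublings * DOUBLING_YEARS
print(f"~{doublings:.1f} doublings -> around the year {year:.0f}")
```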



Serverless Computing from the Inside Out

Fundamentally, cybersecurity isn't about threats and vulnerabilities. It's about business risk. The interesting thing about business risk is that it sits at the core of the organization. It is the risk that results from company operations — whether that risk be legal, regulatory, competitive, or operational. This is why the outside-in approach to cybersecurity has been less than successful: Risk lives at the core of the organization, but cybersecurity strategy and spending has been dictated by factors outside of the organization with little, if any, business risk context. This is why we see organizations devoting too many resources to defend against threats that really aren't major business risks, and too few to those that are. To break the cycle of outside-in futility, security organizations need to change their approach, so they align with other enterprise risk management functions. And that approach is to turn outside-in on its head, and take an inside-out approach to cybersecurity. Inside-out security is not based on the external threat landscape; it's based on an enterprise risk model that defines and prioritizes the relative business risk presented by organizations' digital operations and initiatives. 


Post-Hadoop Data and Analytics Head to the Cloud

Gartner analyst Adam Ronthal said that while there are some native Hadoop options available in public clouds like AWS, they may not be the best solution for many applications. "There's a fair bit of complexity that goes into managing a Hadoop cluster," he told InformationWeek. Non-Hadoop-based cloud solutions may look simpler and easier to organizations that are evaluating data and analytics solutions. But that doesn't mean there's not a place for Hadoop in the future. Ronthal said that Hadoop is experiencing a "market correction" rather than an existential crisis. There are use cases that Hadoop is really good at, he said. But a few years back, Hadoop was the rock star technology that was the solution to every problem. "The promises out there 3, 4, or 5 years ago were that Hadoop was going to change the world and redefine how we did data management," he said. "That statement overpromised and underdelivered. What we are really seeing now is recognition of workloads that Hadoop is really good at, like the data science exploration workloads."


Artificial intelligence could revolutionize medical care. But don’t trust it to read your x-ray just yet


The algorithms learn as scientists feed them hundreds or thousands of images—of mammograms, for example—training the technology to recognize patterns faster and more accurately than a human could. "If I'm doing an MRI of a moving heart, I can have the computer predict where the heart's going to be in the next fraction of a second and get a better picture instead of a blurry" one, says Krishna Kandarpa, a cardiovascular and interventional radiologist at the National Institute of Biomedical Imaging and Bioengineering in Bethesda, Maryland. Or AI might analyze computed tomography head scans of suspected strokes, label those more likely to harbor a brain bleed, and put them on top of the pile for the radiologist to examine. An algorithm could help spot breast tumors in mammograms that a radiologist's eyes risk missing. But Eric Oermann, a neurosurgeon at Mount Sinai Hospital in New York City, has explored one downside of the algorithms: The signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled.


Cybersecurity Risk Assessment – Made Easy 

Cybersecurity risk assessment is critical because cyber risks are part and parcel of any technology-oriented business. Factors such as lax cybersecurity policies and technological solutions that have vulnerabilities expose an organization to security risks. Failing to manage such risks provides cybercriminals with opportunities for launching massive cyberattacks. Fortunately, a cybersecurity risk assessment allows a business to detect existing risks. It also facilitates risk analysis and evaluation to identify the vulnerabilities with the highest damage potential. As a result, a business can identify suitable controls for addressing the risks. ... Cybersecurity risk assessments have many other benefits, all aimed at bolstering organizational security. They are critical for any company looking to harden its cybersecurity and, most importantly, they are the method by which a company identifies the most suitable security controls needed to achieve an optimum cybersecurity posture.


Cybersecurity Accountability Spread Thin in the C-Suite

"CEOs are no longer looking at cyber-risk as a separate topic. More and more they have it embedded into their overall change programs and are beginning to make strategic decisions with cyber-risk in mind," says Tony Buffomante, global co-leader of cybersecurity services at KPMG. "It is no longer viewed as a standalone solution."  That sounds good at the surface level, but other recently surfaced statistics offer grounding counterbalance. A global survey of C-suite executives released last week by Nominet indicates these top executives have some serious gaps in knowledge about cybersecurity, with around 71% admitting they don't know enough about the main threats their organizations face. This corroborates with a survey of CISOs conducted earlier this year by the firm that indicates security knowledge and expertise possessed by the board and C-levels is still dangerously low. Approximately 70% of security executives agree that at least one cybersecurity specialist should be on the board in order for it to take appropriate levels of due diligence in considering the issues.


Industrial IoT as Practical Digital Transformation

To navigate this journey in the face of both uncertainty and hype, their company leaders chose a measured approach of “practical” digital transformation. To begin, they adopted IoT through an iterative process of incremental value testing. Notably, they selected goals for increasing internal effectiveness rather than fixating on new customer offerings. As a result, usage data from equipment inside customer facilities now empowers a more cost-effective services team and reduces truck rolls. Furthermore, understanding how their machines are operated in the field enables product teams to proactively identify problem areas and continuously improve their equipment offerings. Both use cases are internal rather than directly customer-facing. Yet it’s their customers who ultimately benefit from higher operational productivity enabled by these ever-smarter systems. Moving forward, machine utilization numbers will better prepare sales teams for guiding customers toward systems best matching their true capacity needs, as well as inform warranty management issues. Connected systems create opportunities for exceeding customer expectations at every turn.


Why the Cloud Data Diaspora Forces Businesses to Rethink their Analytics Strategies

The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes. More importantly, it allows you to manage your costs. One of the biggest advantages of a microservice-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolithic architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads -- basically on the fly -- and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it. ... A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge.


Brexit GDPR and the flow of data: there could be one winner and that’s the cyber criminal

Huseyin advocates technology. It may not come as a shock to learn he advocated a product from nsKnox. He refers to Cooperative Cyber Security, which allows data to be shared across organisations and networks in a way that is completely cryptographic and shredded. "If you can take information with identifiers and put it into a form which is actually meaningless and shred it cryptographically and then distribute it to the partners of the data consortium who want to be able to access that information, you're now pushing data around the world potentially without ever exposing the actual underlying information. So, for example, we could take your name and we can shred it and we can distribute it to, let's say, two banks in Europe and two banks in the UK. Each of those banks holds a piece of information and collectively that information makes up your name, but individually those pieces of information are just bits of encrypted binary data. So totally meaningless." So that's two potential solutions: get close to the regulator, and apply appropriate technology.
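
The "shred and distribute" idea resembles classic secret sharing. The toy Python sketch below (an XOR scheme used purely for illustration, not nsKnox's actual product) shows how each distributed piece is individually indistinguishable from random noise, while the full set reconstructs the original:

```python
import secrets

# Toy XOR secret sharing, for illustration only (not nsKnox's scheme).
def shred(data: bytes, n_shares: int = 4) -> list:
    """Split data into n shares; each share alone is random noise."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n_shares - 1)]
    final = data
    for s in shares:                 # XOR the data with every random share
        final = bytes(a ^ b for a, b in zip(final, s))
    return shares + [final]

def reassemble(shares: list) -> bytes:
    out = bytes(len(shares[0]))      # start from all-zero bytes
    for s in shares:                 # XOR of all shares recovers the data
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

pieces = shred(b"Jane Q. Customer")  # e.g. one piece per bank in the consortium
print(pieces[0])                     # meaningless random bytes on its own
print(reassemble(pieces))            # b'Jane Q. Customer'
```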


How to Use Open Source Prometheus to Monitor Applications at Scale

Using Prometheus, we looked to monitor "generic" application metrics, including the throughput (TPS) and response times of the Kafka load generator (Kafka producer), the Kafka consumer, and the Cassandra client (which detects anomalies). Additionally, we wanted to monitor some application-specific metrics, including the number of rows returned for each Cassandra read and the number of anomalies detected. We also needed to monitor hardware metrics such as CPU for each of the AWS EC2 instances the application runs on, and to centralize monitoring by adding Kafka and Cassandra metrics there as well. To accomplish this, we began by creating a simple test pipeline with three methods (producer, consumer, and detector). We then used a counter metric named "prometheusTest_requests_total" to measure how many times each stage of the pipeline executes successfully, with a label called "stage" to tell the different stage counts apart (using "total" for the total pipeline count). We then used a second counter named "prometheusTest_anomalies_total" to count detected anomalies.
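
A condensed sketch of those two counters, using the standard prometheus_client Python library, follows; the pipeline stages here are simplified stand-ins for the real Kafka and Cassandra components:

```python
import random
import time

from prometheus_client import Counter, start_http_server

# The two counters described above; stage work is simulated for brevity.
requests_total = Counter(
    "prometheusTest_requests_total",
    "Successful executions per pipeline stage",
    ["stage"],                          # label to tell stage counts apart
)
anomalies_total = Counter(
    "prometheusTest_anomalies_total", "Number of detected anomalies")

start_http_server(8000)                 # Prometheus scrapes host:8000/metrics

for _ in range(100_000):
    requests_total.labels(stage="producer").inc()
    requests_total.labels(stage="consumer").inc()
    requests_total.labels(stage="detector").inc()
    if random.random() < 0.01:          # stand-in for the real anomaly check
        anomalies_total.inc()
    requests_total.labels(stage="total").inc()   # whole pipeline succeeded
    time.sleep(0.01)
```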



Quote for the day:


"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous


Daily Tech Digest - June 19, 2019

RPA use cases that take RPA to next level


"Start with a small piece of a larger process and take on more," he said. "Then, look upstream and downstream, and ask, 'How do we take that small use case and expand the scope of what the bot is automating? Can we take on more steps in the process, or can we initiate the automation earlier in the process to grow?'" At the same time, Abel said CIOs should create an RPA center of excellence and develop the tech talent needed to take on bigger RPA use cases. He agreed that a strong RPA governance program to ensure the bots are monitored and address any change control procedures is crucial. It's also essential to maintain a strong data governance program, he said, as the bots need good data to operate accurately. Additionally, Abel said he advises CIOs to work with other enterprise executives to develop RPA use cases that align to business objectives so that RPA deployments have long-term value. Abel pointed to one client's experience as a cautionary tale. He said that company jumped right into RPA, deploying bots to automate various tasks. 


The underlying blockchain transactional network will be able to handle thousands of transactions per second; data about the financial transactions will be kept separate from data about the social network, according to David Marcus, the former president of PayPal. He is now leading Facebook's new digital wallet division, Calibra. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent, the social network said in a statement. "This means Calibra customers' account information and financial data will not be used to improve ad targeting on the Facebook family of products," Facebook said. Calibra and its underlying blockchain distributed ledger will scale to meet the demands of "billions," Marcus said in an interview with Fox Business News this morning. Libra is different from other cryptocurrencies, such as bitcoin, in that it is backed by fiat currency, so its value is not simply determined by supply and demand. Bitcoin is "not a good medium of exchange today because [fiat] currency is actually very stable and bitcoin is volatile," Marcus said in the Fox Business News interview.


Western Digital launches open-source zettabyte storage initiative

With this project, Western Digital is targeting cloud and hyperscale providers and anyone building a large data center who has to manage a large amount of data, according to Eddie Ramirez, senior director of product marketing for Western Digital. Western Digital is changing how data is written and stored, from the traditional random 4K block writes to large blocks of sequential data, like big data workloads and video streams, which are rapidly growing in size and use in the digital age. "We are now looking at a one-size-fits-all architecture that leaves a lot of TCO [total cost of ownership] benefits on the table if you design for a single architecture," Ramirez said. "We are looking at workloads that don't rely on small block randomization of data but are large block sequential write in nature." Because drives use 4K write blocks, that leads to over-provisioning of storage, especially around SSDs. This is true of consumer and enterprise SSDs alike. My 1TB SSD drive has only 930GB available. And that loss scales. An 8TB SSD has only 6.4TB available, according to Ramirez.



'Extreme But Plausible' Cyberthreats

A new report from Accenture highlights five key areas where cyberthreats in the financial services sector will evolve. Many of these threats could commingle, making them even more disruptive, says Valerie Abend, a managing director at Accenture who's one of the authors of the report. The report, "Future Cyber Threats: Extreme But Plausible Threat Scenarios in Financial Services," focuses on credential and identity theft; data theft and manipulation; destructive and disruptive malware; cyberattackers' use of emerging technologies, such as blockchain, cryptocurrency and artificial intelligence; and disinformation campaigns. In an interview with Information Security Media Group, Abend offers an example of how attackers could commingle threats. If attackers were to wage "a multistaged attack using credential theft against multiple parties that then used disruptive or destructive malware, so that they actually change the information at key points in the business process of critical financial functions ... and then used misinformation outside of that entity using various parts of social media ... they could really do some serious damage," Abend says.


How to prepare for and navigate a technology disaster

Two key developments will have the largest impact on business continuity and disaster recovery planning. The first is serverless architecture. Using the term very loosely, the adoption of serverless capabilities will dramatically increase application and data portability and enable workloads to be executed virtually anywhere. We're quite a bit of a way from this being the default way you build applications, but it's coming, and it's coming fast. The second is edge computing. As modern applications and business intelligence move to the edge, the ability to 'fail over' to additional resources will increase, minimizing (if not eliminating) real and perceived downtime. The more identical places you can run your application, the better the level of availability and performance is going to be. This definitely isn't simple, but we're seeing (and developing) applications every day that are built with this architecture in mind, and it's game-changing for enterprise and application architecture and planning.


Q&A on the Book Risk-First Software Development

The Risk Landscape is the idea that whenever we do something to deal with one risk, what’s actually happening is that we’re going to pick up other risks as a result. For example, hiring new developers into a team might mean you can clear more Feature Risks (by building features the customers need), but it also means you’re going to pick up Coordination and Agency Risk, because of your bigger team. So, you’re moving about on a Risk Landscape, hoping to find a nice position where the risks are better for you. This first volume of Risk-First Software Development was all about that landscape, and the types of risks you’ll find on it. I am planning a second volume, which again will all be available to read on riskfirst.org. This will focus more on the tools and techniques you can use to navigate the Risk Landscape.  For example, if I have a distributed team, I might face a lot of Coordination Risk, where work is duplicated, or people step on each other’s toes. What are the techniques I can use to address that? I could introduce a chat tool like Slack, but it might end up wasting developer time and causing more Schedule Risk. 


Microservices Chassis Pattern
This is not something new. Reusability is something we learn at the very beginning of our developer lives. This pattern cuts down on the redundancy factor and complexity across services by abstracting the common logic to a separate layer. If you have a very generic chassis, it could even be used across platforms or organizations and wouldn't need to be limited to a specific project. It depends on how you write it and what piece of logic you move to this framework. A chassis is part of your microservices infrastructure layer. You can move all sorts of connectivity, configuration, and monitoring concerns to a base framework. ... When you start writing a new service by identifying a domain (DDD) or by identifying the functionality, you might end up writing lots of common code. As you progress and create more and more services, it could result in code duplication, or even chaos, to manage such common concerns and redundant functionalities. Moving such logic to a common place and reusing it across different services would improve the overall lifecycle of your services. You might spend some initial effort in creating this component, but it will make your life easier later on.
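
A minimal sketch of the idea in Python follows (the names are illustrative, not a real framework): cross-cutting concerns live once in a chassis base class, and each microservice subclass adds only its domain logic.

```python
import logging
import os
import time

# Illustrative-only chassis sketch: configuration, logging and health
# checks live once here, shared by every service built on top of it.
class ServiceChassis:
    def __init__(self, name: str):
        self.name = name
        # Shared configuration loading (here: APP_-prefixed env vars)
        self.config = {k[4:]: v for k, v in os.environ.items()
                       if k.startswith("APP_")}
        logging.basicConfig(level=logging.INFO)
        self.log = logging.getLogger(name)      # shared logging setup
        self.started = time.time()

    def health(self) -> dict:                   # shared health-check payload
        return {"service": self.name, "uptime_s": time.time() - self.started}

class OrderService(ServiceChassis):             # only domain logic lives here
    def place_order(self, item: str) -> None:
        self.log.info("order placed: %s", item)

svc = OrderService("orders")
svc.place_order("widget")
print(svc.health())
```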


Remember, data is used for dealing with your customers, making decisions, generating reports, and understanding revenue and expenditures. Everyone from the customer service team to your senior executive team uses data and relies on it being good enough to use. Data governance provides the foundation so that everything else can work. This will include obvious "data" activities like master data management, business intelligence, big data analytics, machine learning and artificial intelligence. But don't get stuck thinking only in terms of data. Lots of processes in your organization can go wrong if the data is wrong, leading to customer complaints, damaged stock, and halted production lines. Don't limit your thinking to only data activities. If your organization is using data (and to be honest, which companies aren't?), you need data governance. Some people may not believe that data governance is sexy, but it is important for everyone. It need not be an overly complex burden that adds controls and obstacles to getting things done. Data governance should be a practical thing, designed to proactively manage the data that is important to your organization.



A well-managed cloud storage service ties directly into the apps you use to create and edit business documents, unlocking a host of collaboration scenarios for employees in your organization and giving you robust version tracking as a side benefit. Any member of your organization can, for example, create a document (or a spreadsheet or presentation) using their office PC, and then review comments and changes from co-workers using a phone or tablet. A cloud-based file storage service also allows you to share files securely, using custom links or email, and it gives you as administrator the power to prevent people in your organization from sharing your company's secrets without permission. With the assistance of sync clients for every major desktop and mobile platform, employees have access to key work files anytime, anywhere, on any device. You might already have access to full-strength cloud collaboration features without even knowing it. If you use Microsoft's Office 365 or Google's G Suite, cloud storage isn't a separate product, it's a feature.


Boost QA velocity with incremental integration testing

There are several strategies for incremental integration testing, including bottom-up, top-down and a hybrid approach blending elements of both, as well as automation. Each method has benefits and limitations, and gets incorporated into an overall test strategy in different ways. These incremental approaches help enable shift-left testing, which means automation shapes how teams can perform the practice. ... Often, the best approach is hybrid, or sandwich, integration testing, which combines both top-down and bottom-up techniques. Hybrid integration testing exploits bottom-up and top-down during the same test cycles. Testers use both drivers and stubs in this scenario. The hybrid approach is multilayered, testing at least three levels of code at the same time. Hybrid integration testing offers the advantages of both approaches, all in support of shift left. Some of the disadvantages remain, especially as the test team must work on both drivers and stubs.
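
To make the drivers-and-stubs vocabulary concrete, here is a minimal Python unittest sketch (the module names are invented): a driver test exercises a mid-level component from above while a stub stands in for a lower layer that isn't integrated yet, which is exactly the combination hybrid testing runs in one cycle.

```python
import unittest

# Stub: stands in for the not-yet-integrated lower layer (bottom-up style).
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

# The mid-level unit whose integration is under test.
class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        return self.gateway.charge(amount)["status"] == "ok"

# Driver: plays the role of the top layer calling down (top-down style).
class OrderProcessorDriver(unittest.TestCase):
    def test_checkout_succeeds_against_stub(self):
        self.assertTrue(OrderProcessor(PaymentGatewayStub()).checkout(42))

if __name__ == "__main__":
    unittest.main()
```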



Quote for the day:


"What you do makes a difference, and you have to decide what kind of difference you want to make." -- Jane Goodall