Daily Tech Digest - August 05, 2020

Data privacy and data security are not the same

"Data privacy is, in essence, a subset of an organization's data security," Ewing said. "The distinction is important because, although the tools used to maintain data privacy and to ensure data security may overlap, the two are generally addressed differently by different teams using different tools." This overlap can cause confusion, leaving companies who focus just on data security with the false impression that, by default, data privacy also is protected. This is not the case. Unlike data security, which focuses on protecting all of an organization's data from theft or corruption (like during a ransomware attack), data privacy is more granular. To ensure data privacy, organizations must understand, track, and control things like who is authorized to access the data and where the data is stored -- in a Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud, for example. A good example of differences between data privacy and data security was the harvesting of 87 million Facebook user profiles by the now-defunct political consulting firm Cambridge Analytica during the 2016-17 US presidential election, said Joshua Kail, a communications consultant who ran agency-side PR for Cambridge Analytica until it shut down in May 2018. 


State of the Art in Automated Machine Learning

Through the years of development of the machine learning domain, we have seen that a large number of tasks around data manipulation, feature engineering, feature selection, model evaluation, and hyperparameter tuning can be defined as an optimization problem and, with enough computing power, efficiently automated. We can see numerous proofs of this not only in research but also in the software industry, as platform offerings or open-source libraries. All these tools use predefined methods for data processing, model training, and evaluation. The creative approach to framing problems and applying new techniques to existing problems is the part that is not likely to be replicated by machine automation, due to the large number of possible permutations, the complex context, and the expertise the machine lacks. As an example, look at the design of neural net architectures and their applications: a problem whose search space is so vast that progress is still mostly human-driven. ... In theory, the entire ML process is computationally hard. From fitting data to, say, a neural network, to hyperparameter selection, to neural architecture search (NAS), these are all hard problems in the general case. However, all of these components have been automated with varying degrees of success for specific problems, thanks to a combination of algorithmic advances, computational power, and patience.
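The framing of hyperparameter tuning as an optimization problem can be made concrete with a minimal random-search sketch. This is an illustrative toy, not any of the AutoML platforms mentioned above: the quadratic `validation_loss` stands in for a real train-and-evaluate cycle, and the hyperparameter names and ranges are assumptions.

```python
import random

def validation_loss(learning_rate, num_layers):
    # Toy stand-in for training a model and measuring validation loss.
    return (learning_rate - 0.01) ** 2 + (num_layers - 3) ** 2 * 0.001

def random_search(trials=200, seed=42):
    """Treat tuning as optimization: sample configs, keep the best one."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        candidate = {
            "learning_rate": rng.uniform(1e-4, 1e-1),
            "num_layers": rng.randint(1, 8),
        }
        loss = validation_loss(**candidate)
        if best is None or loss < best[0]:
            best = (loss, candidate)
    return best

best_loss, best_params = random_search()
print(best_loss, best_params)
```

With enough compute, the same loop generalizes to any configuration space, which is exactly why these tasks automate so readily.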


How AI is Becoming Essential to Cyber-Strategy

The problem with machine learning is that the AI is limited to the features that it has been taught to expect. Fooling a machine learning security system is as simple as adding an unexpected/unprogrammed feature into the exploit. Imagine a card trick such as “find the lady”, where the machine learning software expects the dealer to operate inside the given parameters (the dealer is only moving around these three cards), but the dealer is cheating by having a fourth card. Because the concept of the fourth card is outside the expected features, the program can be defeated. What artificial neural networks can do is allow an AI to self-determine what features it uses to reach a conclusion. An artificial neural network still requires some degree of human input to confirm if a conclusion is incorrect, but it effectively self-organizes how it reviews and manages the data it has access to. As an example, an AI looking for new types of viruses can sense everything happening in a computer and then identify, based on everything it observes, whether a program or even an activity in memory is doing something unwelcome. It does not need to have seen the behavior before; it only has to recognize the outcome, or potential outcome.
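The idea of flagging unwelcome behavior without having seen it before can be sketched with a simple statistical baseline. This is a deliberately minimal stand-in for a real neural approach: the per-process features, baseline numbers, and threshold are all hypothetical.

```python
import statistics

# Hypothetical per-process features: (file writes/sec, network conns/sec, CPU %)
baseline = [
    (2, 1, 5), (3, 0, 7), (1, 2, 4), (2, 1, 6), (4, 1, 8),
    (3, 2, 5), (2, 0, 6), (1, 1, 4), (3, 1, 7), (2, 2, 5),
]

def zscores(sample, history):
    """Per-feature z-scores of a sample against historical behaviour."""
    scores = []
    for i, value in enumerate(sample):
        column = [row[i] for row in history]
        mean = statistics.mean(column)
        stdev = statistics.pstdev(column) or 1.0  # guard against zero variance
        scores.append((value - mean) / stdev)
    return scores

def is_anomalous(sample, history, threshold=3.0):
    """Flag behaviour far outside the baseline -- no signature required."""
    return any(abs(z) > threshold for z in zscores(sample, history))

print(is_anomalous((2, 1, 6), baseline))     # → False (normal-looking process)
print(is_anomalous((250, 40, 95), baseline)) # → True (e.g. mass file encryption)
```

The point mirrors the article's: the detector recognizes the outcome (behavior far outside everything it has observed) rather than matching a known virus.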


ICML 2020 highlights: A Transformer-based RL agent, causal ML for increased privacy, and more

Microsoft researchers are in full summer swing when it comes to advancing machine learning in accessibility, privacy, healthcare, and other areas. As Microsoft Partner Research Manager and ICML President John Langford puts it, “ICML is a very broad conference, so its specialty is in some sense ‘all of the above.’” But Langford goes on to add that one of the topics that ICML has a long track record on is currently trending: reinforcement learning. A brief glance through the sessions and workshops presented by Microsoft researchers shows the wide influence reinforcement learning has in our world today, from natural language to robotics to infrastructure considerations like transportation. Beyond the research contributions, Microsoft was also a sponsor of and recruiter at the conference. Additionally, the company sponsored two events co-located with the conference, the first Women in Machine Learning Un-Workshop and the fourth Queer in AI Workshop. The impact of the conference—now and in the future—is multifaceted, according to Langford. “ICML is ‘the’ summer machine learning conference. As such, it’s critically important to the academic discovery, review, and dissemination process, a great way to meet fellow researchers, and a natural recruiting point for the field,” he says.


An open source solution for continuous testing at scale

With recent and ongoing updates, organizations can leverage Cerberus' features from development to operations. It expands digital experience test coverage by executing tests on a variety of browsers, devices, and apps. Its native connectors for APIs (including SOAP and REST), desktop applications, and Apache Kafka enable testing legacy apps, APIs, event-driven microservices, streaming services, business intelligence, data science applications, and other use cases. During the software development lifecycle, Cerberus supports fast iterations in test management, execution, and reporting. Users can create test specifications in plain English, compose tests using a library, execute in parallel on various devices, and do advanced reporting. Native integration with CI/CD solutions, such as Jenkins, Bitbucket, and others, combined with one-click ticket creation in Jira and other tools, makes bug resolution faster and easier. Cerberus can also monitor customer experience and business operations. Tests can be functional and technical, allowing organizations to test complex scenarios. For example, France's leading TV channel, TF1, uses it for quality assurance on its streaming platform.


Retrospectives for Management Teams

Good action points are the ones that propel the team forward and make them productive; I focus on quantity, quality, and the process itself. When it comes to quantity, it’s always wise to limit our commitments in order to maximize our chance of delivering them on time. Sometimes it pains the team to let go of some great ideas and not turn them into action points after a meeting. I believe it’s our duty as facilitators to increase the likelihood of a positive impact, even if it means cutting the number of initiatives we start simultaneously. When it comes to quality, in Radical Candor Kim Scott gives an easy-to-remember recipe for action points. You need to have a one-line answer to “who will do what by when?” If you do not have an answer on all three aspects, you don’t have an action point after all. If you follow her lead, you get a statement that is easy to act upon, easy to check if it’s being done, and easy to communicate to your stakeholders. Regarding the process, I like to encourage people to write their action items themselves -- it helps to frame them in a way they understand and find easy to act upon. It helps to remember them, too.


What is an IT director? Everything you need to know about one of the top jobs in tech

The first IT professionals were employed to help their organisations manage mainframe systems. As computers became more integral to the way we work, so technology leaders – be they IT directors or CIOs – started to be appointed. IT director was the more commonly used term initially. Through the late 1990s and into the new millennium, it became customary for the top executive in a business to take the CIO moniker. While that's still often the case, it's not a hard and fast rule – many organisations still use the IT director title to describe their most senior tech executive, or use closely related titles, such as head of IT, head of technology, vice president of IT, or VP of technology. Apart from the job title, the roles are perceived to have a subtly different focus. Many big organisations now employ a CIO and an IT director. Where both executives are in situ, a split in responsibilities is likely to occur. IT directors are more likely to ensure day-to-day technology operations meet the mark, covering areas such as system uptime, service maintenance and vendor agreements. CIOs, on the other hand, are seen as the outward face of the technology department – CIOs spend less time in the data centre and more time engaging with their business peers in an attempt to understand how technology can be used to help meet their demands.


The Age of Accelerating Strategy Breakthroughs

Leading companies are also prioritizing the need to identify threats and opportunities created by megatrends that can rapidly reshape businesses. The coronavirus pandemic has shown that negative megatrends like epidemics and climate change can no longer be treated as tail risks so extreme that no preparation would make a difference. Companies have to build up resilience to safeguard profits by being prepared to play ferocious defense against other negative megatrends gathering momentum, like public debt crises, at one end of the spectrum. At the other, they must aggressively pursue new prospects created by positive megatrends like digitalization and health and wellness. Macro shifts set off by the pandemic illustrate how quickly megatrends can force companies to reset strategies. Retailers are rerouting investments earmarked for building physical locations into upgrading online commerce features and delivery services. Financial services companies are accelerating many more digital-only offerings, such as contactless payments and risk management products such as health insurance.


How to avoid cloud vendor lock-in and take advantage of multi-vendor sourcing options

Businesses recognise the benefit of utilising different suppliers, and over half are now using more than one public cloud provider. According to McQuire, the moves of major cloud providers reflect this trend, with the launch of products like Google Cloud’s Anthos and BigQuery Omni, as well as Microsoft’s Azure Arc. “Customers and developers want depth in cloud services but don’t want to be locked into a single cloud environment. Above all, they want choice when it comes to spinning up infrastructure for new applications, lift-and-shift projects or maintaining consistency across their on-premises, public cloud and edge environments,” he comments. McQuire warns, however, that while the market is still very early in its transition to the cloud, “care must be taken in pursuing multi-cloud approaches, so that they are not adding even more complexity to an already highly-complicated cloud computing stack. “Whilst consistency is key in multi-cloud, there will be those that do not want a lowest common denominator approach in order to support this strategy.”


How Ransomware Threats Are Evolving & How to Spot Them

"The cleverness, the creativity, and the intimate knowledge of these very, very miniscule technical details to craft a bypass like that is almost unseen in criminal malware," says Wisniewski. "It's the kind of thing we expect to see in espionage-style attacks, not in criminal attacks." Some attackers bypass technical tools by "living off the land," or using legitimate admin tools to achieve goals. Some use software deployment tools to roll out ransomware instead of delivering patches to Windows machines, Wisniewski says as an example. They may abuse PowerShell, other Microsoft tools, or so-called "gray hat" tools like Metasploit or Cobalt Strike. This behavior isn't new, Wisniewski says. "What is new is that may be the only indication you're going to get that they're in your network." Organizations may notice small, unusual things once in a while, remedy them, and close the ticket without realizing they're part of a larger incident. By the time they do, an attacker has been in their network for weeks. WastedLocker and Maze will "sit there for a month" to figure out the thing that will shut down their enterprise victim.



Quote for the day:

"Entrepreneurs must be willing to be misunderstood for long periods of time." -- Jeff Bezos

Daily Tech Digest - August 04, 2020

Ethical AI in healthcare

In many ways these technologies are going to be shaping us even before we've answered this question. We'll wake up one morning and realize that we have been shaped. But maybe there is an opportunity for each of us, in our own settings and in conversations with our colleagues and at the dinner table, and with society, more broadly, to ask the question, What are we really working toward? What would we be willing to give up in order to realize the benefits? And can we build some consensus around that?  How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and on the other, ensure that we're continuing to care? What would that world look like? How can we maintain the reason why we came into medicine in the first place, because we care about people, how can we ensure that we don't inadvertently lose that?  The optimistic view is that, by virtue of freeing up time by moving some tasks off of clinicians’ desks, and moving the clinician away from the screen, maybe we can create space, and sustain space for caring. The hope that is often articulated is that AI will free up time, potentially, for what really matters most. That's the aspiration. But the question we need to ask ourselves is, What would be the enabling conditions for that to be realized?


Apache Cassandra’s road to the cloud

What makes the goal of open sourcing cloud-native extensions to Cassandra attainable is the emergence of Kubernetes and related technologies. The fact that all of these technologies are open source and that Kubernetes has become the de facto standard for container orchestration has made it thinkable for herds of cats to converge, at least around a common API. And enterprises embracing the cloud have created demand for something to happen, now. A cloud-native special interest group has formed within the Apache Cassandra community and is still at the early stages of scoping out the task; this is not part of the official Apache project, at least not yet. Of course, the Apache Cassandra community had to get its own house in order first. As Steven J. Vaughan-Nichols recounted in his exhaustive post, Apache Cassandra 4.0 is quite definitive, not only in its feature-completeness, but also in the thoroughness with which it has flushed out the bugs to make it production-ready. Unlike previous dot-zero versions, when Cassandra 4.0 goes GA, it will be production-ready. The 4.0 release hardens the platform with faster data streaming, not only to boost replication performance between clusters, but also to make failover more robust. But 4.0 stopped short of anything to do with Kubernetes.


From doorbells to nuclear reactors: why focus on IoT security

An important step in network security for IoT is identifying the company’s most essential activities and putting protections around them. For manufacturing companies, the production line is the key process. Essential machinery must be segmented from other parts of the company’s internal network, such as marketing, sales and accounting. For most companies, just 5% to 10% of operations are critical. Segmenting these assets is vital for protecting strategic operations from attacks. One of the greatest risks of the connected world is that something quite trivial, such as a cheap IoT sensor embedded in a doorbell or a fish tank, could end up having a huge impact on a business if it gets into the wrong communication flow and becomes an entry point for a cyber attack. To address these risks, segmentation should be at the heart of every company’s connected strategy. That means defining the purpose of every device and object linked to a network and setting boundaries, so it only connects to the parts of the network that help it serve that purpose. With 5G, a system known as Network Slicing helps create segmentation. Network Slicing separates mobile data into different streams. Each stream is isolated from the next, so watching video could occur on a separate stream from a voice connection.
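The segmentation principle described above -- every device may reach only the parts of the network its purpose requires -- can be sketched as a tiny allow-list check. The device classes and subnet plan here are illustrative assumptions, not a real deployment:

```python
from ipaddress import ip_address, ip_network

# Hypothetical segment plan: each device class may talk only to listed subnets.
ALLOWED_SEGMENTS = {
    "doorbell-sensor": [ip_network("10.20.0.0/24")],   # IoT segment only
    "plc-controller": [ip_network("10.10.0.0/24")],    # production line only
    "workstation": [ip_network("10.30.0.0/24"), ip_network("10.40.0.0/24")],
}

def connection_permitted(device_class, destination):
    """True only if the destination lies in a segment assigned to this device."""
    dest = ip_address(destination)
    return any(dest in net for net in ALLOWED_SEGMENTS.get(device_class, []))

print(connection_permitted("doorbell-sensor", "10.20.0.15"))  # → True
print(connection_permitted("doorbell-sensor", "10.10.0.5"))   # → False
```

In practice the same policy would be enforced by firewalls, VLANs, or 5G network slices rather than application code, but the logic -- purpose defines reachable segments -- is the same.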


The ABCs of Data Science Algorithms

An organization’s raw data is the cornerstone of any data science strategy. Companies that have previously invested in big data often benefit from a more flexible cloud or hybrid IT infrastructure that is ready to deliver on the promise of predictive models for better decision making. Big data is the invaluable foundation of a truly data-driven enterprise. In order to deploy AI solutions, companies should consider building a data lake -- a centralized repository that allows a business to store structured and unstructured data on a large scale -- before embarking on a digital transformation roadmap. To understand the fundamental importance of a solid infrastructure, let’s compare data to oil. In this scenario, data science serves as the refinery that turns raw data into valuable information for business. Other technologies -- business intelligence dashboards and reporting tools -- benefit from big data, but data science is the key to unleashing its true value. AI and machine learning algorithms reveal correlations and dependencies in business processes that would otherwise remain hidden in the organization’s collection of raw data. Ultimately, this actionable insight is like refined oil: It is the fuel that drives innovation, optimizing resources to make the business more efficient and profitable.


Soon, your brain will be connected to a computer. Can we stop hackers breaking in?

Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could cause problems with acquiring data from the brain, as well as with sending signals from the device back to the cortex, either by altering or exfiltrating the data. Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user's brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts. Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks that could be used on BCIs: feeding in specially crafted stimuli to affect either the users or the BCI itself in order to extract certain information – showing users images to gather their reactions, for example. Other similar attacks could be carried out to hijack users' BCI systems by feeding in fake versions of the neural inputs, causing users to take unintended actions – potentially turning BCIs into bots, for example.
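A standard defence against the interception-and-replacement attacks described above is to authenticate each data frame so tampering in transit is detectable. The sketch below uses Python's `hmac` module; the shared key, frame format, and "neural sample" payload are all assumptions for illustration, not part of any real BCI stack:

```python
import hmac
import hashlib
import os

# Assumed: headset and host share a secret key established out of band.
SECRET_KEY = os.urandom(32)

def sign_frame(frame: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any in-transit modification is detectable."""
    tag = hmac.new(SECRET_KEY, frame, hashlib.sha256).digest()
    return frame + tag

def verify_frame(signed: bytes) -> bytes:
    """Return the frame if its tag checks out; reject tampered frames."""
    frame, tag = signed[:-32], signed[-32:]
    expected = hmac.new(SECRET_KEY, frame, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("frame rejected: possible man-in-the-middle tampering")
    return frame

signed = sign_frame(b"neural-sample:0.42,0.17")
assert verify_frame(signed) == b"neural-sample:0.42,0.17"

tampered = b"neural-sample:9.99,9.99" + signed[-32:]  # attacker swaps the payload
try:
    verify_frame(tampered)
except ValueError as err:
    print(err)  # → frame rejected: possible man-in-the-middle tampering
```

Integrity checks like this do not stop an attacker from reading the stream (that requires encryption), but they defeat the replace-the-data variant of the attack.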


Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, in terms of cybersecurity, advanced data analysis tools, planning, or even optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods and the development of sensitive expertise such as data science, artificial intelligence and cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


Why ISO 56000 Innovation Management matters to CIOs

The ISO 56000 series presents a new framework for innovation, laying out the fundamentals, structures and support that ISO leaders say are needed within an enterprise to create and sustain innovation. More specifically, the series provides guidance for organizations to understand and respond to changing conditions, to pursue new opportunities and to apply the knowledge and creativity of people within the organization and in collaboration with external interested parties, said Alice de Casanove, chairwoman of the ISO 56000 standard series and innovation director at Airbus. ISO, which began work on these standards in 2013, started publishing its guidelines last year. ISO 56002, Innovation management system, and ISO 56003, Tools and methods for innovation partnership, were published in 2019. ISO released its Innovation management -- Fundamentals and vocabulary in February 2020. Four additional parts of the series are forthcoming. The committee developed the innovation standards so that they'd be applicable to organizations of all types and sizes, de Casanove said. "All leaders want to move from serendipity to a structured approach to innovation management," she explained.


How plans to automate coding could mean big changes ahead

Known as a "code similarity system", the principle that underpins MISIM is not new: technologies that try to determine whether a piece of code is similar to another one already exist, and are widely used by developers to gain insights from other existing programs. Facebook, for instance, uses a code recommendation system called Aroma, which, much like auto-text, recommends extensions for a snippet of code already written by engineers – based on the assumption that programmers often write code that is similar to that which has already been written. But most existing systems focus on how code is written in order to establish similarities with other programs. MISIM, on the other hand, looks at what a snippet of code intends to do, regardless of the way it is designed. This means that even if different languages, data structures and algorithms are used to perform the same computation, MISIM can still establish similarity. The tool uses a new technology called context-aware semantic structure (CASS), which lets MISIM interpret code at a higher level – not just a program's structure, but also its intent. When it is presented with code, the algorithm translates it in a form that represents what the software does, rather than how it is written; MISIM then compares the outcome it has found for the code to that of millions of other programs taken from online repositories.


RPA bots: Messy tech that might upend the software business

Where it gets interesting is that these RPA bots are basically building the infrastructure for all the other pieces to fit together, such as AI, CRM, ERP and even documents. They believe in the long-heralded walled-garden approach, in which enterprises choose one best-of-breed infrastructure platform like Salesforce, SAP or Oracle and build everything on top of that. History has shown that messy sometimes makes more sense. The internet did not develop from something clean and organized -- it flourished on top of TCP: a messy, inefficient and bloated protocol. Indeed, back in the early days of the internet, telecom engineers were working on an organized protocol stack called Open Systems Interconnection (OSI) that was engineered to be highly efficient. But then TCP came along as the inelegant alternative that happened to work and, more important, made it possible to add new devices that no one had planned on in the beginning. Automation Anywhere's CTO Prince Kohli said other kinds of messy technologies have followed the same path. After TCP, HTTP came along to provide a lingua franca for building web pages. Then, web developers started using it to connect applications using JavaScript Object Notation (JSON).



Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang

Daily Tech Digest - August 03, 2020

When hybrid multicloud has technical advantages

Big companies can’t turn their ships fast enough, and the CIO must consider setting priorities — based on business impact and time to value — for application modernization. As Keith Townsend, co-founder of The CTO Advisor, put it on Twitter, “Will moving all of my Oracle apps to Amazon RDS net business value vs. using that talent to create new apps for different business initiatives? The problem is today, these are the same resources.” Then ask software developers, and you’ll find many prefer building applications that deploy to public clouds, and that leverage serverless architectures. They can automate application deployment with CI/CD, configure the infrastructure with IaC, and leave the low-level infrastructure support to the public cloud vendor and other cloud-native managed service providers. And will your organization be able to standardize on a single public cloud? Probably not. Acquisitions may bring in different public clouds than your standards, and many commercial applications run only on specific public clouds. Chances are, your organization is going to be multicloud even if it tries hard to avoid it. In the discussion below, we’ll examine a number of scenarios in which a hybrid cloud architecture offers technical advantages over private cloud only or multiple public clouds.


Layering domains and microservices using API Gateways

Bounded contexts are the philosophical building blocks of microservice architectures. If we want to layer our architecture, we need to layer our concepts. And as you might imagine, this is not difficult at all! We have the entire organization’s structure to be inspired, and since domain driven systems tie in very closely with how organizations are organized, there is plenty of opportunity to copy-paste. Our organization’s structure clearly tells us that a “domain” can mean very different things at different levels of abstractions. As soon as we say “abstraction”, we know that we are in a hierarchical world. If you have ever seen a junior developer try to explain a production outage to a senior manager, you know what I am talking about. The minutiae of system implementation don’t matter to the senior manager because at his level of operation, “outage due to timeout in calling payment authentication service from checkout validator service” is interpreted as “outage in checkout due to payment system”. He doesn’t care about “timeout”, “authentication”, “validator” or “service” – he cares about “checkout”, “outage”, and “payment”. The CEO doesn’t even care about “checkout” and “payment”, he probably just hears “tech” and “outage”.
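The layering idea above -- each level of abstraction re-describes the same event in its own vocabulary -- can be sketched as a small translation step an API gateway or incident tooling might apply. The service names and audience labels are illustrative, borrowed from the article's own outage example:

```python
# Hypothetical service-level incident, in the junior developer's vocabulary.
INCIDENT = {
    "service": "payment-authentication",
    "caller": "checkout-validator",
    "error": "timeout",
}

def describe_for(audience, incident):
    """Re-describe the same incident at the abstraction level the audience operates at."""
    if audience == "engineer":
        return (f"{incident['error']} calling {incident['service']} "
                f"from {incident['caller']}")
    if audience == "senior-manager":
        return "outage in checkout due to payment system"
    if audience == "ceo":
        return "tech outage"
    raise ValueError(f"unknown audience: {audience}")

print(describe_for("engineer", INCIDENT))
print(describe_for("ceo", INCIDENT))  # → tech outage
```

Layered API gateways do the same thing for requests and domain events: each layer exposes only the vocabulary meaningful at its level and hides the minutiae below it.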


5G: What Does The Future Hold? 

According to David Hardman, until 5G networks cover a much greater area of the UK, major initiatives around the technology are likely to focus more on what is possible in the future, rather than necessarily providing solutions today. “It’s a chicken and egg situation,” he says. “New, innovative products and services need to be developed in parallel with infrastructure roll-out in order to take full commercial advantage. Businesses coming through the 5G incubator, 5PRING, in the early days are likely to be larger established businesses that can plug in to what 5G currently offers. Full implementation will enable real commercial returns for these organisations, with the next wave of innovation then coming from new businesses that establish themselves when the 5G service is fully up and running.” “Although Huawei concerns and COVID-19 have impacted progress, the pandemic has also woken people up to what true digital communication is about. Can you imagine what the working world would have looked like if the virus had struck 15 years ago, when none of the remote working technology was readily available? If we look forward another 10 years, the development of 5G will bring a further evolutionary step-jump in what digital has to offer in all aspects of our lives.”


To lead in the postcrisis tomorrow, put leadership and capabilities in place today

By and large, the great remote-working experiment brought on by the crisis has shown that a lot can be accomplished, immediately and virtually, with small teams, fewer and streamlined cycles, and without so much time expended on travel. As one executive noted when talking about his company’s meetings budget: “The problem isn’t where we are doing the meeting or why, but why did we have to convene two dozen people to all get together to make the decision … instead of just three people on a disciplined conference call.” Why are we talking about speed in a discussion about investing in an organization’s capabilities? Because without equally addressing speed, an organization’s progress innovating and adapting merely grinds along. Often, counterintuitively, it may be necessary to put in some “good bureaucracy.” During the crisis, some companies have traded in traditional videoconferencing, replete with large numbers of contributors, in favor of “wartime councils” in which multiple senior stakeholders gather once to act rapidly as decision makers. Using something as simple as a two-page document, teams can cut straight to the heart of a business issue and get to yes or no quickly, often with better results. Such exercises are worth retaining and propagating.


5G unmade: The UK’s Huawei reversal splits the global telecom supply chain

It would be the quintessential catalyst for market fragmentation. This was the argument being made by every telco, every equipment producer, and every telecom analyst with whom we spoke two years ago, without exception, back when the world seemed more cooperative and globalization was a good thing. "What we don't want," explained Enders' James Barford, "is a situation where Huawei and ZTE work in China, Ericsson and Nokia work in the rest of the world, and Samsung does a bit here and there. Ultimately, telecoms companies everywhere have reduced choice, and at the basic standards level, suppliers aren't working together. The best ideas aren't winning through. At the moment, if one of Ericsson, Huawei, and Nokia have a good idea, the others have to follow... to keep up. But we don't want to be in a situation where one of them has an innovation, and the rest of the world just kind of carries on. If Huawei has an innovation that makes China better, the rest of the world just misses out." Dr. Lewis chuckled a bit at this scenario. His assertion is that standards at all levels, but especially technology standards, become global by nature. Yes, countries may seek to assert some type of sovereign control over Internet traffic — besides China, Germany and Russia have also staked claims to digital sovereignty, and observers do perceive this as a global trend.


A Perfect Storm: The “New Normal” in Business and CCPA Compliance Enforcement

Privacy compliance, while something we have to do now that the CCPA started active enforcement July 1, is not just a “one and done” task—you need to scale out your privacy program to stay ahead of each new mandate and adapt to today’s evolving landscape, whether COVID or the next major unpredictable event. Burying one’s head in the sand has very costly consequences. Let’s have a look at data subject reporting: While data protection is a critical aspect of avoiding a data breach or misuse, there is also a real cost in handling data subject rights requests from your loyal customers as the CCPA begins enforcement and the GDPR continues on. This requires transparency into data access and use across your organization. A major industry analyst firm pointed out in a survey on the GDPR last year that handling these inquiries manually, on a case-by-case basis, can represent a potential outlay of $1,406 per request. Without an automated approach to privacy compliance, the costs of managing data subject requests at scale can quickly overwhelm unprepared organizations. To keep up, you’ll need to take advantage of automation and AI to find customer data across your organization and report on its use, or risk privacy violations, with regulatory fines and brand reputation at stake.


Is a lack of governance hindering your cloud workload protection?

As operations in the cloud grow together with the teams managing them, company-wide visibility and accountability become critical issues. After all, you can’t accurately detect, stop, or respond to something if you can’t see it. Workload events therefore need to be captured, analysed and stored so that security teams have the visibility they need to detect and stop threats in real time, as well as to hunt down and investigate them. Accountability is a critical concern for information security in cloud computing, representing above all the trust in the service relationship between clients and cloud providers such as Microsoft Azure. Indeed, without evidence of accountability, a lack of trust and confidence in cloud computing can raise its head among those concerned with managing the business. Sensitive data (PII) is processed in the cloud, and governance is critical to make sure that such data is always processed and stored in a secure manner. Data protection is big news these days – especially with the advent of compliance regulations covering PII, such as the General Data Protection Regulation (GDPR). The shared responsibility model between the cloud platform provider you choose and your organisation means that you (the organisation) remain responsible for the protection and security of any sensitive data from your end customers.


Not seeing automation success? Think like a pilot

Figuring out what processes need to be automated is one thing. Managing them from then on is a whole new ball game – and one that will require constant attention. After all, autopilot only kicks in once the plane is successfully cruising. Process mining technologies help you analyse and discover processes using your business’ data, but process intelligence goes several steps further, offering the deep understanding and real-time monitoring of processes that many businesses are missing. It can drill down into the granular details, explain why processes don’t work and how to fix them, and give you the tools to solve problems you didn’t even know existed. It’s vital that business leaders check in on their processes often during this phase, to see where issues lie, which processes are most problematic, and which are ripe for automation. Once this is in good shape, you can move on to intelligent automation – combining process intelligence with automation like RPA. This is the switch to autopilot. Here, the technology can spot potential issues with processes, like bottlenecks or delays, before they happen, and update bots with corrective actions to fix the failing process.
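As a toy illustration of the simplest thing process intelligence does, the sketch below scans a hypothetical event log for steps whose average duration suggests a bottleneck. All step names, durations, and the threshold are invented for illustration:

```python
from statistics import mean

# Hypothetical event log: (case_id, process_step, duration_minutes)
events = [
    ("c1", "receive_order", 5), ("c1", "approve", 120), ("c1", "ship", 30),
    ("c2", "receive_order", 4), ("c2", "approve", 95),  ("c2", "ship", 28),
    ("c3", "receive_order", 6), ("c3", "approve", 110), ("c3", "ship", 35),
]

def find_bottlenecks(events, threshold_minutes=60):
    """Flag steps whose average duration exceeds a threshold."""
    durations = {}
    for _case, step, minutes in events:
        durations.setdefault(step, []).append(minutes)
    return {step: mean(d) for step, d in durations.items()
            if mean(d) > threshold_minutes}

print(find_bottlenecks(events))  # only the 'approve' step stands out
```

Real process-intelligence products reconstruct the process model from timestamps and case IDs rather than taking it as given, but the core question is the same: which steps consume the time?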


What are script-based attacks and what can be done to prevent them?

The use of scripts offers many advantages to the attacker: scripts are easy to write and execute, trivial to obfuscate, and extremely polymorphic. Moreover, attackers can use many types of script files to carry out an attack – the most popular being PowerShell, JavaScript, HTA, VBA, VBS, and batch scripts. Since fileless attacks occur in memory, traditional static file detection is rendered useless. Furthermore, scripts complicate post-event analysis, since many artifacts related to the attack exist only in the computer’s memory and may be overwritten or removed through a reboot, for example. In-memory detection and artifact collection are possible through the use of heuristics and behavioral analysis, which can detect malicious in-memory activities. Script-based attacks run on virtually all Windows systems, increasing the potential attack surface and the chance of infection. One major drawback of script-based attacks is that, unless deployed via an exploit, user interaction is required for the script to run. For example, in most cases, the script is contained either as a script file within an email requiring user action or as a VBA macro in a document that requires the user to enable macros.
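As a simplified illustration of the behavioral heuristics mentioned above, the sketch below flags process command lines carrying common markers of malicious PowerShell use. The patterns are illustrative only; real endpoint products combine many more signals (parent process, AMSI events, in-memory scanning, and so on):

```python
import re

# Illustrative detection heuristics for suspicious script invocations
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\s",                      # base64-encoded payload
    r"-w(indowstyle)?\s+hidden",                  # hidden console window
    r"downloadstring|invoke-expression|\biex\b",  # download-and-execute
]

def looks_suspicious(command_line: str) -> bool:
    """Flag a command line that matches known-bad script markers."""
    cmd = command_line.lower()
    return any(re.search(pattern, cmd) for pattern in SUSPICIOUS_PATTERNS)

print(looks_suspicious("powershell.exe -enc SQBFAFgAWAA="))   # True
print(looks_suspicious("powershell.exe Get-ChildItem C:\\"))  # False
```

Pattern matching like this is easy to evade, which is exactly why the article stresses in-memory behavioral analysis over static inspection.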


The Illusion of Digital Transformation in Banking

There are strong indications that leadership lacks experience in implementing such massive transformations. This has resulted in a prioritization of technology purchases that may only scratch the surface of the needed transformation. For instance, a new mobile banking platform is only as good as the underlying processes that must also be changed to improve the overall digital banking customer experience. It also appears that the current financial strength of the industry is breeding complacency about making large, overarching changes to what has long been the operating norm in banking. But the challenges don’t end there. On the not-too-distant horizon, banks and credit unions will need to address a digital skills shortage and the internal culture shift required to facilitate needed innovation and transformation. ... The organizations with the greatest digital transformation maturity tend to be upgrading the greatest number of digital technologies. In most cases, the prioritization is determined by a mix of business requirements, cost, ease (or difficulty) of transformation, and skills available either internally or through partners. Organizations with the highest digital transformation maturity have also made progress on implementing more sophisticated technologies, including artificial intelligence (AI), robotic process automation (RPA), cloud computing, the Internet of Things (IoT), and blockchain solutions.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract

Daily Tech Digest - August 02, 2020

Test Automation Best Practices

Designing tests and test data is the most crucial and time-consuming portion of the testing process. To be valid, test design must be precise in indicating the software functionalities to be tested. During the design phase, test conditions are identified based on specified test requirements, effective test modules and metrics are developed, and the anticipated behavior that will yield valid results is determined. Automated testing performs evaluations against manual test requirements to verify the reliability of the automated process, and is characterized by the use of an automation framework to configure testing modules. The automation framework supports the development of automated test scripts while also monitoring and maintaining test results and related documentation; it is the structural foundation of the automated test suite. Automation is best focused on identified priority factors for deployment. Manual testing can precede automated testing to contribute test conditions and data that test automation can then use for regression and other types of testing.
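One common way to keep test design precise while separating test data from test logic is data-driven testing. A minimal sketch, in which the function under test and all values are invented for illustration:

```python
def apply_discount(price: float, customer_tier: str) -> float:
    """Hypothetical function under test."""
    rates = {"gold": 0.20, "silver": 0.10, "standard": 0.0}
    return round(price * (1 - rates.get(customer_tier, 0.0)), 2)

# Test conditions and expected results live in data, designed from the
# requirements, so new cases can be added without touching the harness.
TEST_CASES = [
    ((100.0, "gold"), 80.0),
    ((100.0, "silver"), 90.0),
    ((100.0, "standard"), 100.0),
    ((100.0, "unknown"), 100.0),  # unspecified tiers get no discount
]

def run_suite():
    """Return a list of failing cases; empty means the suite passed."""
    failures = []
    for (price, tier), expected in TEST_CASES:
        actual = apply_discount(price, tier)
        if actual != expected:
            failures.append((price, tier, expected, actual))
    return failures

assert run_suite() == []  # all designed conditions pass
```

In practice a framework such as pytest plays the role of `run_suite`, but the design principle is the same: the table of conditions is the artifact the design phase produces.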


Winning in Digital Innovation: Turning Scale and Legacy into Strengths

Over the past few years, disruptive forces have hit industry after industry. Travel has been disrupted by Priceline, Expedia, TripAdvisor, and Airbnb, transportation by Uber, and retail by Amazon and Alibaba. For established businesses, the most disruptive threats tend to come from outside traditional competition. New companies not only spot opportunities to create value that many incumbents fail to see, they also tend to operate with different business models. In fact, it’s no longer about having a level playing field. The disruptors are playing an entirely new game. Google is a master of this new game, converting an array of industries into advertising revenue. Amazon is another serial disruptor with its Amazon Prime now in a two-horse race with Netflix — undermining the model of traditional broadcast industries. Even those that have not yet been significantly impacted by these forces are not safe. Over the next five years, 40 percent of companies will face some form of digital disruption, according to Forbes magazine. Artificial intelligence is beginning to attack knowledge-based industries previously seen as safe from disruption, thanks in large part to companies such as Google and Amazon offering “AI on tap.”


How Payments Fintech Is Using Banking As A Service To Drive Growth

There are two core challenges that Banking as a Service helps an international payments company overcome. The first is the need for a regulated entity to be involved when offering many core banking services, such as checking accounts or savings and lending products. The second is that the technology requirements and capabilities needed to offer these products, such as maintaining account ledgers for customer accounts, are very different from those of core payments services. Obtaining the necessary regulatory licenses and building the technology can be two of the most expensive cost items for a financial services company. Banking as a Service exists to reduce both the time and cost Fintechs spend on these two items, allowing them to focus on their core businesses. And for cross-border payments companies or Fintechs with international ambitions, a whole additional level of complexity comes from adding a geographic dimension. Regulations and technologies differ greatly from country to country, which means more time and more cost. We spoke with the CEOs and senior management of various Banking as a Service companies in the UK and US to understand what is driving the growth in Banking as a Service.


Here’s why IT departments need predictive analytics

AI-based detection platforms are capable of monitoring IT systems in real time, checking for early signs of potential failures. To take one example, my company Appnomic has managed to handle 250,000 severe IT incidents for our clients with AI, which equals more than 850,000 man-hours of work. By harnessing machine learning, such platforms can use past data to learn how problems typically develop, enabling a company to step in before anything unfortunate occurs. In 2017, Gartner coined the term “artificial intelligence systems for IT operations” (AIOps) to describe this kind of AI-driven predictive analysis, and the market research firm believes that the use of AIOps will grow considerably over the next few years. In 2018, only 5 percent of large enterprises were using AIOps, but the firm estimates that by 2023 this figure will rise to 30 percent. This growth will be driven by the several benefits that come from applying machine learning and data science to IT systems. Aside from detecting likely problems before they occur, AI can significantly reduce false alarms, in that it can gain a more reliable grasp of what actually leads to failures than previous technologies and human operators.
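At its simplest, the predictive monitoring that AIOps platforms perform can be approximated with a rolling-statistics check on a metric stream. A deliberately simplified sketch, using invented latency data; real platforms learn trained models rather than applying a fixed z-score threshold:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points that deviate sharply from the trailing window:
    a stand-in for the ML models AIOps platforms actually use."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical response-time metric (ms) with one spike at index 8
latency = [102, 98, 101, 99, 103, 100, 97, 101, 450, 102]
print(detect_anomalies(latency))  # → [8]
```

Comparing each point only against its own recent history is also what helps reduce false alarms: a value that would be alarming on one system can be perfectly normal on another.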


The Garmin Hack Was a Warning

Recent victims include not just Garmin but Travelex, an international currency exchange company, which ransomware hackers successfully hit on New Year’s Eve last year. Cloud service provider Blackbaud—relatively low-profile, but a $3.1 billion market cap—disclosed that it paid a ransom to prevent customer data from leaking after an attack in May. And those are just the cases that go public. “There are certainly rather large organizations that you are not hearing about who have been impacted,” says Kimberly Goody, senior manager of analysis at security firm FireEye. “Maybe you don’t hear about that because they choose to pay or because it doesn’t necessarily impact consumers in a way it would be obvious something is wrong.” Bigger companies make attractive ransomware targets for self-evident reasons. “They’re well-insured and can afford to pay a lot more than your little local grocery store,” says Brett Callow, a threat analyst at antivirus company Emsisoft. But ransomware attackers are also opportunistic, and a poorly secured health care system or city—neither of which can tolerate prolonged downtime—has long offered better odds for a payday than corporations that can afford to lock things down.


Facebook’s newest proof-of-concept VR headset looks like a pair of sunglasses

The proof-of-concept glasses aren’t just thin for looks, though — they also apparently beam images to your eyes in a way that’s different than standard VR headsets on the market today. I’ll let Facebook’s research team explain one of those techniques, called “holographic optics:” Most VR displays share a common viewing optic: a simple refractive lens composed of a thick, curved piece of glass or plastic. We propose replacing this bulky element with holographic optics. You may be familiar with holographic images seen at a science museum or on your credit card, which appear to be three-dimensional with realistic depth in or out of the page. Like these holographic images, our holographic optics are a recording of the interaction of laser light with objects, but in this case the object is a lens rather than a 3D scene. The result is a dramatic reduction in thickness and weight: The holographic optic bends light like a lens but looks like a thin, transparent sticker. The proof-of-concept headset also uses a technique Facebook calls “polarization-based optical folding” to help reduce the amount of space between the actual display and the lens that focuses the image.


Regulatory Uncertainty Greatest Problem For Blockchain Entrepreneurs, Says Producer

A regulatory environment characterized by widespread uncertainty is the single biggest challenge facing entrepreneurs in the digital currency and blockchain industry, according to J.D. Seraphine, who produced the docuseries “Open Source Money.” ... The U.S. government has had an overall uneven approach to regulating digital currencies and blockchain. It is a fairly new and complex technology so part of that is attributed to a learning curve for regulators and government officials. There are also multiple agencies who have claimed jurisdiction over the regulation of digital assets each classifying them differently, making it very difficult for companies to know how to operate in this industry in the U.S. The industry needs clear regulations and rules or for the government to step back completely like they did with the early days of the internet. I believe this gray area of uncertainty is the worst thing for entrepreneurs and companies attempting to operate here, and it has led to other countries moving ahead of the U.S. in pioneering what many are calling the most important technology since the creation of the internet.


Black Hat Virtually: An Important Time to Come Together as a Community

What concerns me the most about the moment we're in right now is that the bad actors are getting more sophisticated by the day. The simple attacks don't work as often anymore. I've seen this script numerous times in the course of my career when I look at the work our research teams publish. What worked six months ago may not work now. The only way we can fight back against a more sophisticated opponent is through knowledge-sharing and collective protection, both formal and informal. I'm grateful that the Black Hat community is there to swap war stories of how we've succeeded — and failed — against adversaries. Those conversations, even digitally, will make the difference. Cybersecurity is a team sport. The conversations that the cybersecurity community will have at this year's Black Hat (and at the subsequent DEF CON) will be instrumental in shaping how we all respond going forward as the world has changed. It's our responsibility, as a security community, to take this digital conference just as seriously as we would take an in-person one. 


Is Ethical Use of AI the Only Way Forward?

Companies across the world are spending a lot of time and money on AI, and experts are doing a great deal of research to develop high-quality, genuinely useful AI-based tools. AI is certainly popular, and its adoption will only grow. But do we know how it should, and should not, be used? Are we looking only at the ethical uses of AI? Is anyone examining the unethical ones as well? Sometimes Artificial Intelligence is considered a bit overhyped, though it is not, and we have also been reading about some of its dangers in the recent past. AI has mostly turned out to be useful for humans, but because AI mimics human intelligence, some risk is involved as well. AI becomes a problem mainly when humans find it difficult to understand how to use it and make the most of it. The intentions of the people using it also have to be good: AI itself is not harmful, but users have to make sure that AI tools are used rightly. Even so, Artificial Intelligence remains a source of some worry for humans.


Enterprise architecture heats up to meet changing needs

Skills are definitely one of the biggest challenges at the moment. Most people are making the decision to expand their EA, or start an EA if they haven't had one, and they just move people from one box to another. Just because you can code software doesn't mean you can think like an architect. If you are a systems engineer, you know the processes and systems, but it doesn't mean you can do capability modeling and things like that. When it comes to tools, one of the biggest barriers to EAs moving forward is ROI. The reason it's hard to come up with an ROI is because people don't do activity-based accounting. They don't identify how long they spend doing all of their tasks. If they had that information, they could say, 'I can save this amount of money if I automate these things.' The other big barrier is that people on the business side are now tech-savvy, and they question the need for EA. They don't want EAs telling them to use certain technology. A lot of the business [leaders] are now thinking, 'IT is just a cost center. I want [IT] to be an order taker.'




Quote for the day:

"And no heart has ever suffered when it goes in search of its dream." -- Paulo Coelho

Daily Tech Digest - August 01, 2020

Top 6 cybersecurity trends to watch for at Black Hat USA 2020

Tom Kellermann, head of cybersecurity strategy at VMware Carbon Black, said, "Black Hat USA 2020 will highlight the dramatic surge and increased sophistication of cyberattacks amid COVID-19. A recent VMware Carbon Black report found that from the beginning of February to the end of April 2020, attacks targeting the financial sector grew by 238%. Cybercriminals are also preying on the virtual workforce: the mass shift to remote work has sparked increasingly punitive attacks. Malicious actors have set their sights on commandeering digital transformation efforts to attack the customers of organizations. These burglaries have escalated to home invasions, with destructive attacks up 102% through the use of NotPetya-style ransomware and wipers. Spear phishing is no longer the primary attack vector; rather, OS vulnerabilities, application exploitation, RDP open to the internet, and island hopping have risen to the top." Code42 CISO and CIO Jadee Hanson said, "Top of mind for me is how the mental and emotional wellbeing of our workforce during the pandemic is impacting people's work and behavior and, as a result, their risk profiles. Businesses need to have a strong pulse on how their employees are doing."


Metadata Repository Basics: From Database to Data Architecture

While knowledge graphs have shown potential for the metadata repository to find relationship patterns among large amounts of information, some businesses want more from a metadata repository. Streaming data ingested into databases from social media and IoT sensors also needs to be described. According to a New Stack survey of 800 professional developers, real-time data use has seen a significant increase. What does this mean for the metadata repository? Enterprises want metadata to show the who, what, why, when, and how of their data. The centralized metadata repository database answers these questions but remains too slow and cumbersome to handle large amounts of light-speed metadata. Knowledge graphs have the advantage of dealing with large amounts of data quickly; however, they display only specific types of patterns in their metadata repository. Companies need another metadata repository tool. Here comes the data catalog, a metadata repository that informs consumers what data lives in data systems and the context of that data. Automation and discovery make the data catalog attractive by ensuring it keeps up with fast-moving data and its changes. Business and technical users can easily query the data catalog.


Confidential Computing Will Revolutionize The Internet Of Things

Confidential computing is all about trust. Developers in this field are seeking to accelerate the adoption of what is known as “Trusted Execution Environment” (TEE) technologies. A TEE sequesters code and data away from applications on the main operating system in order to keep them away from adversaries who may gain access to the main operating system. Or, to use an analogy from this article, if the main system is in the White House, for instance, with a variety of protections, a TEE is the bunker underneath it. Within any of these bunkers, only those entities authorized by the actual data owner can view or alter the data. This enables all sorts of applications to operate efficiently without ever needing to have direct access to data. This goes beyond the better-known technique of anonymizing data, which just removes personal identifiers from a database. While anonymization protects privacy, it limits the usefulness of the data, whereas confidential computing secures data even as it is in use, allowing for wider application. Confidential computing protects encrypted software code and data from malicious administrators and hackers in public clouds; it protects sensitive machine-learning models and enables privacy-preserving data analytics.


Technical Challenges of IoT Cybersecurity in a Post-COVID-19 World

For manufacturers of connected IoT products, it is key to focus on their supply chain and increase the ability to break their products down into their respective components. Effective management of vulnerabilities can be done only when information about supply chain dependencies is accurate and recent. A second side effect of the pandemic is the massively increased reliance on cloud-based communication systems. It is unthinkable to conduct business effectively and in compliance with the current legal restrictions without holding a videoconference, sharing a document, or presenting a slide set remotely. The systems used to perform those tasks, however, are largely following the same basic principles that typical client-server architectures have been following for roughly 20 years. While the cryptographic transport protocols have improved significantly since SSLv2, there still is a disparity in the level of trust between client and server: Clients are typically considered entirely untrusted while servers hold all the secrets and relay data securely. While this is easiest for the implementors of backend infrastructure, such a design is something which is fundamentally unpleasant from a security point of view.
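The supply-chain visibility described above is, in essence, a software-bill-of-materials problem: knowing which components each product contains so that a new vulnerability can be mapped to affected products. A minimal sketch, with invented product/component names and a placeholder vulnerability identifier:

```python
# Hypothetical bill of materials: product -> bundled components
product_components = {
    "smart-thermostat-v2": ["busybox 1.31", "openssl 1.1.1d", "mqtt-client 0.9"],
    "door-sensor-v1": ["busybox 1.31", "mqtt-client 0.9"],
}

# Hypothetical vulnerability feed: component -> advisory identifiers
known_vulnerabilities = {
    "openssl 1.1.1d": ["VULN-0001"],  # placeholder identifier
}

def affected_products(components, vulns):
    """Cross-reference each product's components against the feed."""
    report = {}
    for product, deps in components.items():
        hits = {d: vulns[d] for d in deps if d in vulns}
        if hits:
            report[product] = hits
    return report

print(affected_products(product_components, known_vulnerabilities))
```

The hard part in practice is not this join but keeping the bill of materials accurate and recent, which is exactly the point the article makes about supply chain dependencies.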


Industrial robots could 'eat metal' to power themselves

The researchers' vision for a "metal-air scavenger" could solve one of the quandaries of future IoT-enabled factories: how to power a device that moves without adding mass and weight, as one does by adding bulky batteries. The answer, according to the University of Pennsylvania researchers, is to electromechanically forage for energy from the metal surfaces that a robot or IoT device traverses, converting the material gathered into power through a chemical reaction. "Robots and electronics [would] extract energy from large volumes of energy dense material without having to carry the material on-board," the researchers say in a paper they've published in ACS Energy Letters. It would be like "eating metal, breaking down its chemical bonds for energy like humans do with food." Batteries work by repeatedly breaking and creating chemical bonds. The research references the dichotomy between computing and power storage: computing is well suited to miniaturization, and processors have been progressively reduced in size while performance has increased, but battery storage hasn't.


IoT Automation Trend Rides Next Wave of Machine Learning, Big Data

In particular, automated discovery of IoT environments for cybersecurity purposes has been an ongoing driver of IoT automation. That is simply because there is too much machine information to track manually, according to Lerry Wilson, senior director for innovation and digital ecosystems at Splunk. The target is anomalies found in data stream patterns. “Anomalous behavior starts to trickle into the environment, and there’s too much for humans to do,” Wilson said. And, while much of this still requires a human somewhere “in the loop,” the role of automation continues to grow. Wilson said Splunk, which focuses on integrating a breadth of machine data, has worked with partners to ensure incoming data can now kick off useful functions in real time. These kinds of efforts are central to emerging information technology/operations technology (IT/OT) integration. This, along with machine learning (ML), promises increased automation of business workflows. “Today, we and our partners are creating machine learning that will automatically set up a work order – people don’t have to [manually] enter that anymore,” he said.
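The closed loop Wilson describes, anomaly in, work order out, can be sketched in a few lines. All machine names, fields, and the vibration limit below are invented for illustration:

```python
# Work orders opened automatically from anomalous sensor readings,
# with a human kept "in the loop" as reviewer rather than data-entry clerk.
work_orders = []

def ingest_reading(machine_id, vibration_mm_s, limit=7.0):
    """Open a work order when a reading exceeds an illustrative limit."""
    if vibration_mm_s > limit:
        order = {
            "machine": machine_id,
            "reading": vibration_mm_s,
            "action": "inspect bearing",
            "status": "awaiting human review",
        }
        work_orders.append(order)
        return order
    return None

ingest_reading("pump-07", 3.2)   # normal reading, no action
ingest_reading("pump-12", 11.8)  # anomalous reading, work order created
print(len(work_orders))          # → 1
```

Real IT/OT pipelines put an ML model where the fixed limit sits here, which is precisely the shift Wilson describes: the threshold is learned from machine data rather than hand-entered.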


Putting AI And Machine Learning To Work In Cloud-Based BI And Analytics

By leveraging modern technology to automate data lake migration and replication to the cloud with WANdisco LiveData Cloud Services through its patented Distributed Coordination Engine platform. This innovation is founded on fundamental IP based around forming consensus in a distributed network. This is an extremely hard problem to solve, and to this day some people believe it cannot be solved. So what is this problem, at a high level? If you have a network of nodes distributed across the world, with little to no knowledge of the distance and bandwidth between the nodes, how can you get the nodes to coordinate with each other without worrying about any failure scenarios? The solution is the application of a consensus algorithm, and the gold standard in consensus is an algorithm called Paxos. Our Chief Scientist, Dr. Yeturu Aahlad, an expert in distributed systems, devised the first, and even now only, commercialised version of Paxos. By doing so, he solved a problem that had been puzzling computer scientists for years. WANdisco’s LiveData Cloud Services are based on this core IP, including our products focused on analytical data and the challenge of migrating this data to the cloud while keeping it consistent in multiple locations.
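To make the consensus problem concrete, here is a toy single-decree Paxos round. This is a textbook sketch, not WANdisco's implementation: real systems add multi-decree replication, failure handling, and networking, and the value strings below are invented:

```python
class Acceptor:
    def __init__(self):
        self.promised = -1          # highest proposal number promised
        self.accepted = (-1, None)  # (proposal number, value) accepted

    def prepare(self, n):
        """Phase 1b: promise not to accept proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return self.accepted    # report any previously accepted value
        return None                 # reject

    def accept(self, n, value):
        """Phase 2b: accept unless a higher-numbered promise was made."""
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    """A proposer runs both phases against a majority of acceptors."""
    promises = [p for p in (a.prepare(n) for a in acceptors) if p is not None]
    if len(promises) <= len(acceptors) // 2:
        return None  # no majority; retry with a higher proposal number
    # If any acceptor already accepted a value, we must re-propose it
    prior = max(promises, key=lambda p: p[0])
    if prior[1] is not None:
        value = prior[1]
    acks = sum(a.accept(n, value) for a in acceptors)
    return value if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="replicate-to-eu"))   # → replicate-to-eu
# A later, competing proposal learns and re-proposes the chosen value:
print(propose(acceptors, n=2, value="something-else"))    # → replicate-to-eu
```

The second call is the safety property that makes consensus hard: once a majority has accepted a value, no later proposal can change it, no matter how nodes interleave.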


Breach of high-profile Twitter accounts caused by phone spear phishing attack

Whatever specific spear phishing method was used in the breach, clearly the attackers relied on a combination of technical skills and social engineering know-how to convince employees into sharing their account credentials. Of course, that's the M.O. for many phishing attacks and other types of malicious campaigns. "This attack relied on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems," Twitter acknowledged. "This was a striking reminder of how important each person on our team is in protecting our service." Other than training employees through phishing simulations and similar methods, trying to correct human behavior is always challenging. That's why socially engineered attacks are often successful. "This incident demonstrates that social engineering is still a common method for attackers to gain access to internal systems," Ray Kelly, principal security engineer at WhiteHat Security, told TechRepublic. "The human is oftentimes the weakest link in any security chain."


Meeting the Demand: Containerizing Master Data Management in the Cloud

Containerizing MDM as a PaaS offering is essential to realizing the flexibility for which the cloud is renowned. Although this capability is redoubled with Docker or Kubernetes orchestration platforms, containers themselves “reduce the disruption of the architecture of the platform and provide more portability and flexibility for customers,” Melcher remarked. “What a container really is, is kind of a preconfigured application, if you will.” These lightweight repositories include everything needed to deploy apps. Without them, MDM as a native PaaS offering increases the propensity for vendor lock-in per cloud provider, and all but eliminates on-premise hybrid clouds. The speed and ease of containerizing MDM services lets customers “spin the platform up in a matter of minutes without downloading installation and configuration guides, spinning up a Windows server, or loading up a bunch of pre-requisites,” Melcher mentioned. “All of that sort of tribal type knowledge that customers have had to historically take on when they buy an application goes away.”


The Role of Augmented Data Management in the Workplace

The negative aspects of the current situation with COVID-19 and the associated economic downturn have prompted a shift in the workplace that is driving more opportunities for growth, greater visibility into the B2B buying process, and better customer experience throughout the buying cycle, according to Rashmi Vittal, CMO at Foster City, Calif.-based Conversica. “This change introduces intelligent automation into the workplace, something we refer to as the Augmented Workforce,” she said. An Augmented Workforce describes a workplace where business professionals work alongside artificial intelligence to drive better business outcomes. One such AI-driven technology making dramatic changes for customer-facing teams, including sales, marketing, and customer success, is the intelligent virtual assistant (IVA). IVAs accelerate revenue across the customer journey by autonomously engaging contacts, prospects, customers, or partners in human-like, two-way interactions at scale, driving towards the next best action.



Quote for the day:

"It is amazing what you can accomplish if you do not care who gets the credit." -- Harry S. Truman