Daily Tech Digest - June 02, 2020

Big GDPR Fines in UK and Ireland: What's the Holdup?

"Although the impact of COVID-19 may explain some of the current, continued delay, quite why what may end up being over a year to resolve these matters since the ICO announced its intentions to fine may leave some wondering whether GDPR enforcement is going as quickly as it should," he says. "In addition, what was also expected to be a showcase for the first significant fines under GDPR in the U.K. may now be a letdown." But Brian Honan, who heads Dublin-based cybersecurity consultancy BH Consulting, says that an extended legal process isn't surprising, especially because GDPR enforcement norms have yet to be set. "The regulator, be that the ICO or any other regulator, has to ensure their case is as legally watertight as it can be before issuing a fine or a penalty. This is very important as organizations, particularly large ones with deep legal resources, will no doubt challenge any penalties imposed on them," he says. "The BA and Marriott cases are a prime example of this," says Honan, who's also a cybersecurity adviser to Europol, the EU's law enforcement intelligence agency. "We also have to take into account many of the regulators have limited resources, and their staff have to ensure they support the rights of all data subjects as best they can."


How to set up a chaos engineering game day

It isn't easy to run a chaos engineering game day. Nonetheless, it should be both fun and instructive. Manifold has hosted several styles of chaos engineering game days. Examples include 30-minute tabletop events as well as multi-hour active failure events that involve the full engineering team. A recent offsite Manifold event involved dice rolls, character classes and prizes for surviving the chaos incident. To maintain a chaos engineering program, employees must enjoy the challenge. "Uncontrolled chaos will happen to your system -- save your seriousness for that," said James Bowes, CTO of Manifold. Role-playing game days are a great way to keep it interesting. With each chaos engineering game day, the organization should build up its resistance to digital failure. "As you proceed, and if you are successful, it should become more difficult to find parts of the system to break," Bowes said. Let the participants know that the goal is to find problems; if they break something, consider that a success. But keep other teams and stakeholders informed.
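A game-day failure can be rehearsed in code as well as on paper. As a minimal sketch (the function name and failure rate are illustrative, not taken from Manifold's practice), a wrapper that randomly injects failures into a call gives a team something deterministic to break and fix during a drill:

```javascript
// Toy fault injector for rehearsing failure handling. All names and
// the default rate here are illustrative assumptions.
function withChaos(fn, failureRate = 0.3) {
  return (...args) => {
    if (Math.random() < failureRate) {
      // Simulate the kind of fault a game day would inject on purpose
      throw new Error('chaos: injected failure');
    }
    return fn(...args);
  };
}

// failureRate 0 never fails; failureRate 1 always fails, which makes
// the wrapper easy to exercise deterministically during an exercise.
const reliable = withChaos((x) => x * 2, 0);
reliable(21); // returns 42
```

Wrapping a single dependency call like this is the tabletop version of the idea; production-grade chaos tooling injects faults at the infrastructure layer instead.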


It’s Time to Rethink Leadership Around Leading for Resilience

If you lead with the assumption that something somewhere and at some time will jump out and attack, you naturally prepare to defend yourself. This preparation doesn’t distract you from moving forward, but it does prove critical when you need to protect yourself. If your entire supply chain is dependent upon the ongoing support of unfriendly or at least unaligned actors and subject to pendulum swings in the political environment, you diversify the supply chain risk. By the same token, minimizing business model risk by diversifying channels is essential. Moving forward, expect every restaurant and food-service operator that is interested in surviving and thriving to develop robust online and takeout systems and internal processes. I’ve lost interest in, and empathy for, the old-line retailers of my childhood now teetering on the brink of the abyss. They’ve had more than two decades to reset for resilience and diversify their business models, develop new channels, embrace technology, and make themselves relevant to consumers. A few have pulled this off and merit kudos. The rest will likely soon join the growing heap of old brands that will be lost to memory in a few short years.


Work in a COVID-19 world: Back to the office won’t mean back to normal

We’re now able to say, “Okay, what might be the new normal beyond this?” We recognize that there will be re-integration back into our worksites done in the current COVID-19 environment. But beyond COVID, post-vaccines, as we think about our business continuity going forward, I do think that we will be moving into, very purposefully, a more hybrid work arrangement. That means new, innovative, in-office opportunities because we still want people to be working face-to-face and have those in-person sort of collisions, as we call them. Those are interactions you either can’t have at all, or that are harder to have, over videoconferencing. But there can be a new balance between in-office and remote work -- and fine-tuning of our own practices -- that will enable us to be as effective as possible in both environments. So, no doubt, we have already started to undertake that as a post-COVID approach. We are asking what it will look like for us, and then how we make sure, from a philosophical and a strategy perspective, that the right practices are put into place to enable it.


Cloud infrastructure operators should quickly patch VMware Cloud Director flaw

The reason the flaw has not been rated critical is likely because attackers technically need authenticated access to VMware Cloud Director to exploit it. However, according to Citadelo's Zatko, that's not hard to achieve in practice, since most cloud providers offer trial accounts to potential customers that include access to the Cloud Director interface. In most cases there is no real identity verification for such accounts, so attackers can gain easy access without revealing their real identities. This highlights a larger issue with assessing risk based only on vulnerability scores: severity scores don't always reflect or take into account the real-world conditions in which vulnerable systems typically exist. Certain configuration or deployment choices can make a vulnerability much easier or harder to exploit than the advisory or the CVSS score suggests. Zatko is concerned that VMware Cloud Director users may not have taken the issue seriously enough based on the advisory alone. More than two weeks after the patches were released, his company tested another Fortune 500 organization that used the product, and it was still vulnerable.


OpenAI Announces GPT-3 AI Language Model with 175 Billion Parameters

OpenAI made headlines last year with GPT-2 and their decision not to release the 1.5 billion parameter version of the trained model due to "concerns about malicious applications of the technology." GPT-2 is one of many large-scale NLP models based on the Transformer architecture. These models are pre-trained on large text corpora, such as the contents of Wikipedia, using self-supervised learning. In this scenario, instead of using a dataset containing inputs paired with expected outputs, the model is given a sequence of text with words "masked" and it must learn to predict the masked words based on the surrounding context. After this pre-training, the models are then fine-tuned with a labelled benchmark dataset for a particular NLP task, such as question-answering. However, researchers have found that the pre-trained models perform fairly well even without fine-tuning, especially for large models pre-trained on large datasets. Earlier this year, OpenAI published a paper postulating several "laws of scaling" for Transformer models.
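The self-supervised "masked word" objective can be illustrated with a deliberately tiny sketch. The following toy model is a simple co-occurrence counter, not a Transformer, and every name in it is illustrative; it shows only the core idea that the training data supplies its own labels, with the model predicting a hidden word from its context:

```javascript
// Count which word most often follows each word in the corpus.
// The raw text itself provides the "labels" -- no annotation needed.
function trainNextWordCounts(sentences) {
  const next = {};
  for (const s of sentences) {
    const words = s.split(' ');
    for (let i = 0; i + 1 < words.length; i++) {
      next[words[i]] = next[words[i]] || {};
      next[words[i]][words[i + 1]] = (next[words[i]][words[i + 1]] || 0) + 1;
    }
  }
  return next;
}

// Predict a masked word from the word before it: pick the most
// frequently observed successor.
function predictMasked(model, prevWord) {
  const counts = model[prevWord] || {};
  let best = null;
  for (const [word, count] of Object.entries(counts)) {
    if (!best || count > counts[best]) best = word;
  }
  return best;
}

const model = trainNextWordCounts([
  'the cat sat on the mat',
  'the cat ate the fish',
]);
predictMasked(model, 'the'); // 'the [MASK]' → 'cat' (seen twice)
```

A real Transformer replaces the counting with learned attention over the whole surrounding context, in both directions, but the training signal is constructed the same way: hide part of the text and ask the model to recover it.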


10 open source cloud security tools to know

PacBot, also known as Policy as Code Bot, is a compliance monitoring platform. You implement your compliance policies as code, and PacBot checks your resources and assets against those policies. You can use PacBot to automatically create compliance reports and resolve compliance violations with predefined fixes. Use the Asset Group feature to organize your resources within the PacBot UI dashboard, based on certain criteria. For example, you can group all your Amazon EC2 instances by state -- such as pending, running or shutting down -- and view them together. You can also limit the scope of a monitoring action to one asset group, for more targeted compliance. PacBot was created by T-Mobile, which continues to maintain it. It can be used with AWS and Azure. ... Pacu is a penetration testing toolkit for AWS environments. It provides a red team a series of attack modules that aim to compromise EC2 instances, test S3 bucket configurations, disrupt monitoring capabilities and more. The toolkit currently has 36 plugin modules and includes built-in attack auditing for documentation and test timeline purposes. Pacu is written in Python and maintained by Rhino Security Labs, a penetration testing provider.



NIS security regulations proving effective, but more work to do

The government said it now plans to make some technical changes to the regulatory regime to ensure it remains proportionate and targeted and will be considering a number of amendments to be taken up. These changes are likely to centre on cost recovery, to better enable competent authorities to conduct regulatory activity; the implementation of an improved appeals mechanism; more clarity around the wider enforcement regime; the introduction of support to manage risks to organisational supply chains; the introduction of best-practice sharing; and a number of measures to account for any changes that may be needed, or may become possible, after the end of the Brexit transition period. Kuan Hon, a director in the technical team at law firm Fieldfisher, said that based on the statistics presented in the report, there had clearly been very limited enforcement of the NIS regulations so far, with no fines having been levied, and fewer incidents reported to regulators than DCMS anticipated. However, she added, compliance and incident reporting costs had been much higher than first expected.


Cisco takes aim at supporting SASE

Reed stated that secure access and optimal performance are a must. “The rapid adoption of SD-WAN for connecting to multi-cloud applications provides enterprises with the opportunity to rethink how access and security are managed from campus to cloud to edge,” he stated. “With 60% of organizations expecting the majority of applications to be in the cloud by 2021 and over 50% of the workforce to be operating remotely, new networking and security models such as SASE offer a new way to manage the new normal.” According to Reed, the goal of SASE is to provide secure access to applications and data from on-premises data centers or cloud platforms, with access determined by identities that are defined by combinations of characteristics including individuals, groups, locations, devices, and services. Service edge refers to global points of presence (PoP), IaaS, or colocation facilities where local traffic from branches and endpoints is secured and forwarded to the appropriate destination without first traveling through corporate data centers. By delivering security and networking services together from the cloud, organizations will be able to securely connect any user or device to any application and optimize user experience, Reed stated.


Causes of Memory Leaks in JavaScript and How to Avoid Them

The fastest way to check memory usage is to take a look at the browser's Task Manager (not to be confused with the operating system's Task Manager). It provides an overview of all tabs and processes currently running in the browser. Chrome's Task Manager can be accessed by pressing Shift+Esc on Linux and Windows, while Firefox's equivalent can be opened by typing about:performance in the address bar. Among other things, these let us see the JavaScript memory footprint of each tab. If our site is just sitting there doing nothing, yet its JavaScript memory usage is gradually increasing, there's a good chance we have a memory leak. Developer Tools provide more advanced methods for investigating memory. By recording in Chrome's Performance tool, we can visually analyze the performance of a page as it runs. Some patterns are typical of memory leaks, such as steadily increasing heap memory use. Beyond that, both Chrome and Firefox Developer Tools offer excellent ways to explore memory usage further with the help of the Memory tool.
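One classic cause of the gradually climbing heap described above is a long-lived timer whose closure pins a large object. A minimal sketch (the function and variable names are illustrative):

```javascript
// Sketch of a common leak pattern: a timer closure keeps a large
// buffer reachable, so the garbage collector can never reclaim it.
function startPolling(intervalMs = 60_000) {
  const hugeCache = new Array(1_000_000).fill('*');
  const id = setInterval(() => {
    // Because this callback references hugeCache, the array stays on
    // the heap for as long as the timer keeps running.
    void hugeCache.length;
  }, intervalMs);
  return id;
}

// The fix is simply to clear the timer once it is no longer needed,
// which releases the closed-over buffer for garbage collection.
const timerId = startPolling();
clearInterval(timerId);
```

Forgotten timers and event listeners, detached DOM nodes held by JavaScript references, and accidental globals all follow the same shape: something long-lived keeps a reference to something large.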



Quote for the day:

"Leadership is a process of mutual stimulation which by the interplay of individual differences controls human energy in the pursuit of a common goal." -- Paul Pigors

Daily Tech Digest - June 01, 2020

The Cybersecurity Implications of 5G Technology

Since one of the chief benefits envisioned for 5G is the ability to connect more and more devices to the IoT, this “also increases the threat vectors for hackers,” according to HackerNoon.com. Another potential “worst-case scenario” outlined by HackerNoon: “Faster networks can also mean faster ways for viruses and malware to spread. If more users are on the network, then you also have the potential for more infected devices and systems than ever before.” Commenting on the concern that a greatly expanded IoT multiplies the potential points of entry for cyberattacks in an article titled “5G Dangers: What are the Cybersecurity Implications?” Heimdal Security notes that, “5G technology could also lead to botnet attacks, which will spread at a much higher speed than the current networks allow it.” Of particular relevance to the cybersecurity community, the dawn of the 5G era demands that new and improved defenses and cybersecurity protocols be developed and put in place to counter the potential risks. This means the current and future work of many cybersecurity professionals will be inextricably connected to understanding and defending against the new security risks, both known and unknown, posed by this rapidly emerging technological breakthrough.


Quantum AI is still years from enterprise prime time

For quantum AI to mature into a robust enterprise technology, there will need to be a dominant framework for developing, training, and deploying these applications. Google’s TensorFlow Quantum is an odds-on favorite in that regard. Announced this past March, TensorFlow Quantum is a new software-only stack that extends the widely adopted TensorFlow open source AI library and modeling framework. TensorFlow Quantum brings support for a wide range of quantum computing platforms into one of the dominant modeling frameworks used by today’s AI professionals. Developed by Google’s X R&D unit, it enables data scientists to use Python code to develop quantum ML and DL models through standard Keras functions. It also provides a library of quantum circuit simulators and quantum computing primitives that are compatible with existing TensorFlow APIs. Developers can use TensorFlow Quantum for supervised learning on such AI use cases as quantum classification, quantum control, and quantum approximate optimization. They can execute advanced quantum learning tasks such as meta-learning, Hamiltonian learning, and sampling thermal states.


How managed threat hunting helps bust malicious insiders

Alicia first observed an employee apparently hacking their own laptop in order to obtain local admin credentials. This was done using a technique known as sticky keys, actually an accessibility feature built into Windows that can be launched with a specific key combination from the login screen. “Although the technique is referred to as sticky keys, it is actually referring to exploiting the way certain versions of Windows will execute applications designed for accessibility features,” said Lee. “In vulnerable versions of Windows, when these accessibility features are launched via a set of key combinations (shift five times for sticky keys, press ‘Windows+U’ for Windows Utility Manager, etc.), Windows will simply launch the associated application from a hardcoded path in a privileged state. “The adversary exploiting this feature can simply replace the application binary with one of their choosing. As long as the filepath and filename are the specified ones for the shortcut key combination, Windows will execute it. This technique is fairly well-known as a way to recover Windows passwords and has been used by adversaries in the past.”


What is edge computing? Here's why the edge matters and where it's headed

In a modern communications network designed for use at the edge — for example, a 5G wireless network — there are two possible strategies at work. First, data streams, audio, and video may be received faster and with fewer pauses (preferably none at all) when servers are separated from their users by a minimum of intermediate routing points, or "hops." Content delivery networks (CDNs) from providers such as Akamai, Cloudflare, and NTT Communications are built around this strategy. Second, applications may be expedited when their processors are stationed closer to where the data is collected. This is especially true for logistics and large-scale manufacturing applications, as well as for the Internet of Things (IoT), where sensors or data-collecting devices are numerous and highly distributed. Depending on the application, when either or both edge strategies are employed, these servers may actually end up on one end of the network or the other. Because the internet isn't built like the old telephone network, "closer" in terms of routing expediency is not necessarily closer in geographical distance. 


Public speaking for technical pros: How to deliver a great in-person or virtual presentation

There's standing up at stand up, there's doing an all hands demo, then there's doing a small meetup, there is doing a small conference, a multi-speaker small talk at a multi-track conference. There's doing a talk at a single-track conference. There's this whole escalation, and a lot of the levels above meetup are not a different skillset, but a skillset that you would need to focus on and work on. You have to learn to do a CFP (call for proposals), you have to learn to put together a slide deck. There's a whole bunch of stuff around that. And so that's sort of a separate question, but I think to start out, the thing that you need to understand is that everybody in the audience is on your side. A lot of people give this really old speaking advice about imagining your audience naked so that you don't respect them anymore. And I think that's terrible on several levels. Please don't imagine anybody naked. What I want you to do is imagine that they are sitting in this meeting because they want to hear from you. They want you to succeed and if you have a problem, they are empathizing with the problem.


10 Coding Principles Every Programmer Should Learn

There are two general ways to reuse the code you have already written, Inheritance and Composition; both have their own advantages and disadvantages, but, in general, you should always favor composition over inheritance, if possible. Composition allows changing the behavior of a class at run-time by setting a property at run-time, and by using Interfaces to compose a class, we gain polymorphism, which provides the flexibility to replace an implementation with a better one at any time. Even Joshua Bloch’s Effective Java advises favoring composition over inheritance. If you are still not convinced, then you can also read here to learn more about why Composition is better than Inheritance for reusing code and functionality. And, if you keep forgetting this rule, here is an excellent cartoon to put on your desk :-) If you are interested in learning more about Object-Oriented Programming Concepts like Composition, Inheritance, Association, Aggregation, etc., you can also take a look at the Object-Oriented Programming in Java course on Coursera.
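The run-time flexibility that makes composition attractive can be sketched in a few lines. In this illustrative example (the class and method names are assumptions, not from the article), behavior is supplied as a collaborator rather than inherited, so it can be swapped while the program runs:

```javascript
// Two interchangeable collaborators implementing the same informal
// interface: an object with a log(msg) method.
class ConsoleLogger {
  log(msg) { return `console: ${msg}`; }
}
class FileLogger {
  log(msg) { return `file: ${msg}`; }
}

class OrderService {
  constructor(logger) {
    this.logger = logger;        // composed, not inherited
  }
  setLogger(logger) {            // behavior can change at run time
    this.logger = logger;
  }
  placeOrder(id) {
    return this.logger.log(`order ${id} placed`);
  }
}

const service = new OrderService(new ConsoleLogger());
service.placeOrder(1);               // "console: order 1 placed"
service.setLogger(new FileLogger()); // swap behavior without subclassing
service.placeOrder(2);               // "file: order 2 placed"
```

With inheritance, switching from console to file logging would have meant committing to one superclass at design time; with composition, the choice is just a property assignment.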



Extensible Effects in JavaScript for Fun and Profit

Extensible Effects, broadly speaking, is the idea that you can separate the 'what' and 'how' in your code. By representing effects as 'tokens' that hold no intrinsic implementation details, you can write programs that are completely unaware of how they'll eventually interact with their environment. Later on these effects can be 'interpreted' by converting each token into specific actions of your choice. These effects could be general, such as 'send network request', or domain specific, like 'log user out' - it's up to you. For those unfamiliar with monads, you can think of this technique as dependency injection for your software's API calls. You program to an interface, and can provide a different implementation depending on the situation. Extensible effects are implemented via a Freer monad. This is a nested data structure of an initial effect or value, and a sequence of functions that convert the results of a previous effect into the next. When applied to an interpreter function that converts effects into the target monad of your choice, it unwraps from the 'inside out' - the first effect is converted into the target monad, which is mapped into the next effect-containing Freer monad.
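One lightweight way to sketch the token-and-interpreter split in JavaScript is with generators, as popularized by libraries like redux-saga. This is an illustration of the idea only, not the Freer-monad implementation the article describes, and every name in it is an assumption:

```javascript
// The program yields plain effect 'tokens' with no implementation
// attached; it has no idea how they will be carried out.
function* program() {
  const user = yield { type: 'getUser', id: 42 };
  yield { type: 'log', message: `hello ${user.name}` };
  return user.name;
}

// The interpreter walks the generator, converting each token into a
// concrete action and feeding the result back in.
function interpret(gen, handlers) {
  let step = gen.next();
  while (!step.done) {
    step = gen.next(handlers[step.value.type](step.value));
  }
  return step.value;
}

// A test interpreter that never touches the network -- the same
// program could be run against real HTTP handlers instead.
const result = interpret(program(), {
  getUser: ({ id }) => ({ id, name: 'Ada' }),
  log: () => undefined,
});
// result === 'Ada'
```

The dependency-injection analogy from the article is visible here: `program` is written against an interface of effect types, and `interpret` supplies whichever implementation suits the situation.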


Microservices: A cheat sheet

Comparisons are frequently made between microservices and service-oriented architecture (SOA). While the two may seem similar at first glance, they're nearly completely different except in the most basic ways. Both SOA and microservices involve the creation of small components that communicate data to other services, but the scope, purpose, and how the communication occurs are completely different. For starters, SOA is an enterprise-wide architecture, whereas microservice architecture is a way to build a single application. The idea behind SOA is to create a common framework for communication that allows applications, data sources, and other network-connected elements to communicate in a platform-agnostic manner. SOA wants communication between elements to happen quickly, smoothly, and without barriers; this is a radical difference from microservices, which want independent elements that aren't dependent on each other at all. SOA integrations are reused constantly -- that's the goal of SOA, according to IBM. In the case of microservices, reuse is completely undesirable -- if a component is being called in more than one place by its main application, agility and resilience will suffer.


The Four Data Management Mistakes Derailing Your BI Program

There are a number of ways this can happen to a company. When folks come to us looking for a reporting solution to meet their customers’ needs (such as a BI solution designed to be embedded into SaaS applications), they’re not setting up the database in the same step. They’ve already been collecting data for a long time — long before reporting was even a consideration, in most cases. Sometimes we discover that the person who initially set up the database doesn’t even work at the organization anymore and didn’t leave much in the way of documentation or tribal knowledge to help onboard a successor. Other times, responsibility for (and knowledge of) the data is distributed throughout the company. One group might have a deep understanding of the data’s semantics while another, such as IT, might have some insight into its maintenance and traffic capacity. A third group responsible for data analysis might be most familiar with its utility to stakeholders. Unfortunately, none of these groups have a grasp of the database’s structure or complete knowledge of the data itself.


DataOps: The Path to AI-Readiness

Every business has a unique vision or goal for AI, whether it’s improving predictions, automating mundane tasks, freeing up employees to do more fulfilling work, or optimizing processes. But in many cases, there’s no better purpose for AI than in understanding your environment, what your systems are saying through their data, and discovering issues before they snowball into full-blown outages. Organizations lose about $26.5 billion in revenue to IT system outages. IBM’s Watson AIOps understands the systems, normal system behaviors, and acceptable ranges, and provides alerts when a problem arises. In effect, it’s a nervous system that allows CIOs to effectively manage all of their systems. Given that data scientists lament limited data access and the lack of a line of sight between data and all team members, a solution such as this becomes a facilitator for faster, proactive responsiveness. ... AI-enabled automation is integral to DataOps for more than just manual steps; for governance processes, data curation, metadata assignment, and ensuring data is available for self-service. This helps to operationalize consistent high quality data throughout the entire enterprise.



Quote for the day:

"The secret of a leader lies in the tests he has faced over the whole course of his life and the habit of action he develops in meeting those tests." -- Gail Sheehy

Daily Tech Digest - May 31, 2020

The Future Of Fintech: The New Normal After The Covid-19 Crisis

For banks, the new normal marks the end of fintech experimentation. Over the past few years, banks have been obsessed with fintech partnerships. It’s been a way of convincing themselves (and their boards) that they’re innovating and not getting left behind as the industry undergoes a digital transformation. Too many of these efforts, however, have had little impact on the strategic direction, organizational culture, and bottom line results of the institution. According to Louise Beaumont: “For banks, partnerships won’t generate the quantum leap they need to move beyond a product-centric mentality to deliver next-generation services. At best, they may gain a workable solution that squats awkwardly in the existing infrastructure. At worst, they’ll fail to deliver any noticeable difference.” Many so-called partnerships—many of which aren’t partnerships, but just vendor arrangements—are examples of what Jason Henrichs of Fintech Forge likes to call the “fintech petting zoo.” The luxury of experimenting with fintech is gone. Banks will need to accelerate their investments in fintech to achieve both the top line increases and expense reductions needed to maintain margins and profitability.


ACLU sues Clearview AI claiming the company's tech crosses ethical bounds

The ACLU alleges that by using face recognition technology, Clearview has captured more than 3 billion faceprints from images available online, all without the knowledge or consent of those pictured. "Clearview claims that, through this enormous database, it can instantaneously identify the subject of a photograph with unprecedented accuracy, enabling covert and remote surveillance of Americans on a massive scale," it said. "This technology is so dangerous, in fact, that this little-known startup 'might end privacy as we know it'." The ACLU said that Clearview has "created the nightmare scenario that we've long feared, and has crossed the ethical bounds that many companies have refused to even attempt" and accused the company of building a mass database of billions of faceprints without knowledge or consent. "Neither the United States government nor any American company is known to have ever compiled such a massive trove of biometrics," it wrote. "Adding fuel to the fire, Clearview sells access to a smartphone app that allows its customers -- and even those using the app on a trial basis -- to upload a photo of an unknown person and instantaneously receive a set of matching photos."


GoodData and Visa: A common data-driven future? 

One of the initiatives GoodData is taking to help organizations go from dashboards to data-driven applications is the Accelerator Toolkit. The Accelerator Toolkit is a UI library to enable customized and faster data analytics, along with educational resources. Stanek mentioned that GoodData plans to launch a GoodData University initiative soon, to offer more resources to empower organizations. Another noteworthy development for GoodData is the evolution of its Semantic Layer data model. A new modeling tool by GoodData aims to improve collaboration between engineers and analysts to streamline the start process for enterprise data products. Stanek initially referred to this as an attempt to establish a single version of the truth. This, however, has always been an elusive goal. While improving collaboration between engineers and analysts is commendable, more pragmatically, organizations can aim to establish shared data models among user groups, rather than global ones. Stanek did not sound short of ambition, and our conversation touched upon a number of topics. If you want to listen to it in its entirety, make sure to subscribe to the Orchestrate all the Things podcast, where it will be released soon.


Building the foundation for a strong fintech ecosystem in Saudi Arabia

Prior to COVID-19 and its sudden need for global digitalisation, there was already potential for Saudi Arabia to have a strong fintech network. It is the largest economy in the region, with a stock market worth around $549 billion USD that contributed to over half of the region's total gross domestic product (GDP) in 2018, and it is a member of the Group of Twenty (G20); this year it is actually Saudi Arabia that holds the G20 presidency. Saudi Arabia also has a very young population: 70 percent of the population in 2017 was under 30 years old. It is a very tech-savvy nation as well, ranking, according to a report by EY, third globally in smartphone usage and seventh in household internet access. This, coupled with the ongoing economic initiatives and investments that are part of Saudi Vision 2030, has put Saudi fintech prospects and future growth at the forefront. ... Saudi Arabia has an opportunity to further solidify its position to one day be a leader in fintech. It has already, as part of Vision 2030, set the foundation to create an environment that not only is attracting foreign investment but also providing the tools and guidance to create its own talent and innovation as well.


Why Blockchain Needs Kubernetes

Kubernetes and Docker can, and have, abstracted away much of the knowledge required to get started. IBM and Corda have containerized their blockchain protocols and various Ethereum images exist - for added granularity, network component images exist as well, including the Solidity compiler, network stats dashboard, testnets, miner nodes, block explorers, etc. In time, I expect to see more and more component network parts containerized and made available. Deploying blockchains will be a matter of picking a protocol image and the additional components images, building YAML manifests, and deploying with helm install. While modularity is necessary for designing complex networks and is available for those that need it, the choice overload can and will deter adoption for those that do not have the expertise, time, patience, or resources to explore blockchain technology. By packaging up elements of blockchain networks into image files that can be deployed and managed, the requisite knowledge required to get started will be democratized to those that are familiar with Docker and Kubernetes.
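As an illustration of what deploying one such containerized component might look like, here is a hedged sketch of a single Ethereum node as a Kubernetes Deployment. The `ethereum/client-go` image and the `--dev --http` flags are real geth options, but the manifest as a whole is an assumption for illustration, not taken from any of the projects named above:

```yaml
# Illustrative sketch: one Ethereum (geth) node managed by Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: geth-node
spec:
  replicas: 1
  selector:
    matchLabels: { app: geth }
  template:
    metadata:
      labels: { app: geth }
    spec:
      containers:
        - name: geth
          image: ethereum/client-go:stable
          args: ["--dev", "--http"]   # dev-mode chain with HTTP-RPC enabled
          ports:
            - containerPort: 8545     # JSON-RPC endpoint
```

A Helm chart would template manifests like this one, so that adding miners, block explorers, or stats dashboards becomes a matter of chart values rather than hand-written YAML.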


COVID-19 is teaching investors a thing or two about how important an opportunity “edtech” is

In spite of the billions invested across the world in the latest and greatest innovations, technology hasn’t been able to stop or even slow the spread of COVID-19 on any notable scale, something embarrassing to us all. As a result, investors broadly have decided to support the industries and tech where significantly less funding had been placed historically. As an example, we at Perlego have received five times more approaches from new venture capitalists (VCs) and angels since the lockdown. I believe this is for one of two reasons: they either want to help a future society or they’ve seen failures in the likes of medicine, education and ecotech at this time and see these as the new fintechs in the years to come. Regardless of the reason, what is essential is to place more focus on the sectors that were previously seen as poor relations to their shiny counterparts. Investment, growth and the opportunity to succeed must be further developed; such is the necessity for innovation on a global scale. It’s sad that it has taken a global crisis to trigger this thinking.


Optimizing MDM With Agile Data Governance

The embarrassing truth is that most organizations cannot answer these seemingly simple questions, at least without serious effort. In addition, many organizations have been reporting erroneous customer figures as different silos and lines of business fail to work cohesively to manage their master data assets. The annual cost and impact of data quality issues rooted in ungoverned data, with little or no formal accountability around critical enterprise data, have propelled the need for many organizations to fix their MDM problem. It's evident that the need for 'trusted data' continues to appear in nearly all data initiatives. However, most organizations are still struggling with their MDM rollout simply because it's addressed through a single lens. It's one thing to fix the problem by mastering the formerly bad data; it's another to make the solution sustainable by treating the root problem of disparate common data. The value of a 'stewardship culture' around data assets cannot be overemphasized. For MDM to be sustainable and properly implemented, it must be positioned in a governed environment where stewardship around the mastered data and the associated culture of data governance are established.


Unify: Architecting the Missing Link in Data Management

No matter what label or acronym the industry attaches, it comes down to a simple truth: you need a dose of reality before tackling data management. "All recognize the fact that it is impossible for organizations to physically centralize all their data. Instead, data virtualization lets organizations provide one 'virtual' place to go for data consumers to access data and IT to provide it," says Eve. Next, companies need a strategy to tool up for "next-generation data management." "Gartner's advice is to consolidate data management tooling in vendor suites such as TIBCO Unify that combine metadata management, master data management, reference data management, data catalog, data governance, and data virtualization within one integrated solution," says Eve. Data management should not be an IT problem alone. Businesses can chip in by growing their citizen data engineering pool and offering business domain advice. "Work together to assess your needs and skills. Then be smart about maximizing the value each side can contribute, for example, IT using TIBCO Data Virtualization to provision hundreds of reusable data services that the business can quickly mix and match to address their changing needs," says Eve.


ZLoader Banking Malware Resurfaces

ZLoader has an element that downloads and runs the banking malware component from its command-and-control server, researchers at Proofpoint say. ZLoader spread in the wild from June 2016 to February 2018, with a group called TA511 - aka MAN1 or Moskalvzapoe - being one of the top threat actors spreading the malware, the report adds. The ZLoader malware uses webinjects to steal credentials, passwords and cookies stored in web browsers, as well as other sensitive information, from customers of banks and financial institutions, according to Proofpoint. The malware then lets hackers connect to the infected system through a virtual network computing (VNC) client, so they can make fraudulent transactions from the user's device. The researchers note that the latest variant seems to be missing some of the advanced features of the original ZLoader malware, such as code obfuscation and string encryption, among other features. "Hence, the new malware does not appear to be a continuation of the 2018 strain, but likely a fork of an earlier version," the researchers state.


Opening the doors to greater data value with data catalogue

If data isn’t consistent, comprehensive, and accurate, digital transformation efforts may fall short of objectives in a wide range of areas, such as: Laying the foundations for advanced analytics. Data scientists often spend 80% of their time searching for data, and just 20% on actual AI/ML and modeling. A data catalogue reverses the equation by providing quick data discoverability and access to relevant information. That lets data scientists and business analysts use trusted data to deliver the insights needed for data-driven decision-making. Developing a 360-degree customer experience. Because customer data exists in so many corners of the enterprise, it’s essential for organisations to have a holistic view across all sources if they are to truly understand customers as individuals. By identifying all key sources of customer data, a data catalogue provides the foundation for more personalised engagement and improved customer experience. Supporting and accelerating smooth cloud data migration.



Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - May 30, 2020

Tips on Digital Adoption and Transformation from Tesla

Pushback by people resistant to change, regardless of the potential value of the ideas or technology, can stall adoption, Davies said. The friction can come from a refusal to part ways with familiar, comfortable methods. Such reluctance may be reinforced, she said, by a lack of awareness of the features and functionality found in the innovations being introduced. To get teams on board with change, Davies said it is essential to show them the new technology is better than what they already use. “This was Tesla’s strategy when introducing a huge transformation with electric cars,” she said. The same strategy used to get consumers to consider migrating from internal combustion engines to electric vehicles can be applied, Davies said. Common presumptions about electric cars painted them as slow, ugly, and limited on range. When Tesla unveiled the first Roadster, the company promoted its visual aesthetics and performance, she said: capable of accelerating from 0-60 mph in 3.7 seconds, with an operational range of 245 miles on a full charge. “This car was a critical step in the digital transformation to electric cars,” Davies said.


Please, Keep Artificial Intelligence From Becoming Another Out-Of-Touch Bureaucracy

AI inherently operates just like bureaucracies, he adds. “The essence of bureaucracy is to favor rules and procedures over human judgment. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy — I call it ‘algocracy,’ where AI will take more and more critical decisions by the rules outside of any human control.” The results of bureaucratic algocracy could be devastating — affecting university admissions, aircraft performance, or supply chain issues when a crisis hits. That’s why there need to be humans providing input into AI decisions. It takes humans to design forward-thinking processes and companies; tools such as AI are only that: tools that help make things happen. As with many technology innovations, it often gets assumed that dropping AI into a moribund, calcified organisation will magically produce insights and profitability. AI should serve as “augmented” intelligence to support human decision-making — not the other way around.


Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work

In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen’s technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese’s White Peanut Butter Cups. Because they’re stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue. The same person then grabs two gallons of milk by their handles, and moves them across the scanner with one hand. Only one is rung up, but both are put in the bagging area. They then put their own cell phone on top of the machine, and an alert pops up saying they need to wait for assistance—a false positive. “Everseen finally alerts! But does so mistakenly. Oops again,” a caption reads. The filmmaker repeats the same process at two more stores, where they fail to scan a heart-shaped Valentine’s Day chocolate box with a puppy on the front and a Philips Sonicare electric toothbrush. At the end, a caption explains that Everseen failed to stop more than $100 of would-be theft.


How AI is transforming recruitment and hiring

Traditionally, the recruiter is the person who sources resumes from various channels. This is a time-consuming process that is also prone to human error. There are many tools available today that can match your job description to resumes on recruitment portals and help you build a database of the most relevant candidates. These AI-powered tools use pattern-matching algorithms to make sure a resume is a close match to the job description. For example, if the recruiter is looking for a marketing professional with 3-5 years of experience, at a salary of Rs 12,00,000 per annum, who stays within 10 km of the office, the standard search may throw up 30 candidates. However, using AI, the software can suggest that if the desired experience is increased to 6 years and the salary to Rs 15,00,000, there would be 50 candidates ideal for the profile. This data helps recruiters understand where to find the most suitable candidates. The result is the elimination of manual effort and a significant reduction in the number of unsuitable candidates, improving the process.
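The widening logic described above can be sketched in a few lines. The candidate data, field names, and thresholds below are invented purely for illustration and reflect no real recruitment product:

```python
# Hypothetical sketch: apply the recruiter's filters, then probe a relaxed
# version of the same filters to see how much the candidate pool grows.

CANDIDATES = [
    {"name": "A", "experience": 4, "salary": 1_200_000, "distance_km": 8},
    {"name": "B", "experience": 6, "salary": 1_500_000, "distance_km": 5},
    {"name": "C", "experience": 3, "salary": 1_100_000, "distance_km": 12},
]

def search(candidates, min_exp, max_exp, max_salary, max_km):
    # Return every candidate that satisfies all of the recruiter's filters.
    return [
        c for c in candidates
        if min_exp <= c["experience"] <= max_exp
        and c["salary"] <= max_salary
        and c["distance_km"] <= max_km
    ]

strict = search(CANDIDATES, min_exp=3, max_exp=5, max_salary=1_200_000, max_km=10)
relaxed = search(CANDIDATES, min_exp=3, max_exp=6, max_salary=1_500_000, max_km=10)
print(len(strict), len(relaxed))  # widening the filters grows the pool: 1 2
```

The AI-assisted step is essentially running many such relaxed searches and surfacing the ones that yield the largest, still-relevant pools.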


Singapore researchers tapping quantum cryptography to enhance network encryption

ST Engineering's president of cybersecurity systems group, Goh Eng Choon, said: "The threat landscape is evolving very rapidly and we must be prepared for challenges to come in the post-quantum computing era. While QKD technology can be used to secure digital communications, it can also be used to mitigate future quantum computers being used to exploit and maliciously target weak links and disrupt the global encryption ecosystem. This research into quantum cryptography and the co-development of the industry's first solution will allow us to explore the potential of this technology, further strengthen our arsenal of advanced cybersecurity solutions, and gain a foothold in the QKD market," Goh said. NUS is currently working with nanoelectronics institute companies to jointly develop new chip-based quantum crypto devices, which can be applied to the new MDI-QKD technology and broader quantum cryptography technology due to their smaller device footprint and lower cost. NUS' assistant professor Charles Lim Ci Wen, who leads the joint project with ST Engineering, said: "As quantum computing becomes more prevalent worldwide, information security threats will also become more advanced.


Leaders discuss challenges, strategies for women in IT

"Things are only going to change over time, the more we continue to support and promote diversity, diverse teams and allowing different perspectives to prevail and not always sticking with the same old thing that works," Mayshar said. Constellation Research's Miller agreed, explaining that the push for women in the tech industry is not just to get them in there, but also to keep them in there and create visibility for the next generations. "I think that more girls are going to see women in leadership positions in technology companies -- they're going to see more women founders, they're going to see more women CEOs," Miller said. As for the new reality that is remote work in most businesses due to the COVID-19 pandemic, Ray-Pope said it could actually benefit women in IT. "There are ways I am also maximizing my time because of being home," she said. "I do think that we in corporate are in a unique position. I do think, in some ways, I have seen the playing field leveled … [working from home] is opening doors for women who choose to take advantage of it. There is no backroom networking." Juggling between home and work lifestyles isn't a new challenge for women, Miller said.


Why authentic informal leaders are key to an organization’s emotional health

AILs have excellent emotional sensing and energizing capabilities. They naturally detect feelings at play in any organizational challenge, capture and create positive emotions, and know how to influence and encourage people to engage in important behaviors. Management can mobilize them as a powerful resource to learn and identify how to respond in moments of crisis. When appropriate, they can also counterbalance negative feelings. ... AILs can be engaged and activated in a variety of ways — many of them virtual. For example, emotionally intelligent AILs may be asked to launch an effort to understand and find ways to manage the organization’s fear and bolster individual confidence with respect to COVID-19. They can launch virtual small communities that meet regularly to discuss how they are motivating their teams. They can organize a Facebook group or another group to discuss topics informally. Rather than mandate that AILs act in a specific way, formal leaders should ask AILs how best to engage and activate them.


Secure Together: is your organisation prepared for the end of lockdown?

Many have found the sudden shift to remote working so smooth that they are happy to make it a permanent move. A Gartner study revealed that 41% of employees want to continue working remotely some or all of the time after lockdown – up from 30% before the pandemic. However, while there are clearly benefits to remote working, things won’t be the same when offices reopen and there will be new challenges to address. For a start, working from home in a post-lockdown scenario might feel even more isolating, as you and your colleagues are no longer bound by being ‘in it together’. Those who return to the office are likely to resume normal work practices and may not be able to make the time to socialise with remote workers. Likewise, technical difficulties will probably take longer to resolve, as the IT team will once again be prioritising the systems and employees on the premises. These are issues that organisations and their employees should consider carefully as we ease our way out of lockdown.


Debunking The Myth That Greater Compliance Makes IT More Secure

Excelling at compliance doesn't protect any business from being hacked, yet pursuing a continuous risk management strategy helps. With a few exceptions (such as spearphishing), cyberattacks are, by nature, brutally opportunistic and random. They are driven to disrupt operations at best and to steal funds, records, and privileged access credentials at worst. Conversely, the most important compliance events of all, audits, are planned for, often months in advance. Governance, Risk, and Compliance (GRC) teams go to Herculean efforts to meet and exceed audit prep timelines, working evenings and weekends. ... The truth is that organizations are attempting to rationalize the high costs of compliance by looking for ways GRC spend can also improve cybersecurity. This is a dangerous assumption, as Marriott's third breach indicates. Marriott is an excellently managed business and sets standards in compliance. Unfortunately, that hasn't prevented the three breaches it has experienced. Why are organizations assuming GRC spending will improve cybersecurity? It's because both areas share a common series of pains that require different solutions.


The Android hardware truth Google won't tell you

Plain and simple, buying an Android tablet is setting yourself up for disappointment — when it comes to both performance and capability and when it comes to the critical areas of privacy, security, and ongoing software upkeep. So when people ask me which Android tablet they should buy, you know what I tell 'em nowadays? They shouldn't buy one at all. If they want a Googley, Android-connected experience in a large-screen form, they should consider a decent convertible Chromebook instead. The exception — and Chrome OS's remaining weakness — is in the area of super-affordable, small-slate tablets. You can get a crappy Amazon-made Fire tablet for 50 bucks! And Chromebooks have yet to come around to address that demand. So if you're looking for a dirt cheap video screen or, say, something for a child to use, the low-end Android tablets might still be your only real option. When it comes to productivity and actual work-friendly devices, though — situations where the computing experience itself matters and where having an optimally secure, privacy-conscious, and performance-optimized environment is important — the common advice out there is increasingly misguided.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - May 29, 2020

Cases dealt with by AI courts rely heavily on blockchain evidence. For the uninitiated, blockchain is literally a chain of digital blocks: a system for storing digital information (the blocks) in a public database (the chain). Blockchain preserves information about transactions such as the date, time and purchase amount. A classic illustration would be a purchase on Amazon: a series of transactions recorded and kept on a digital platform. Each ‘block’ added to the ‘chain’ comes into the public domain, where it remains preserved. The critical question is: is blockchain tamper-proof? Is alteration of its data impossible by human intervention? Is blockchain data immutable and time-stamped, and can it safely be used as an auditable trail? The judges in China think so. China’s Supreme People’s Court has put matters to rest. It has ruled that evidence authenticated with blockchain technology is binding in legal disputes: "...internet courts shall recognize digital data that are submitted as evidence if relevant parties collected and stored these data via blockchain with digital signatures, reliable timestamps and hash value verification or via a digital deposition platform, and can prove the authenticity of such technology used."
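The tamper-evidence question comes down to hash chaining: each block commits to the previous block's hash, so altering any stored record breaks every link after it. A minimal Python illustration of the idea (a sketch only, not any court's actual deposition platform):

```python
import hashlib
import json

def block_hash(record, timestamp, prev_hash):
    # Hash the block's contents together with the link to the previous block.
    payload = json.dumps([record, timestamp, prev_hash], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record, timestamp):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({
        "record": record,
        "timestamp": timestamp,
        "prev_hash": prev_hash,
        "hash": block_hash(record, timestamp, prev_hash),
    })

def verify(chain):
    # Recompute every hash and every backward link; any edit shows up here.
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["record"], b["timestamp"], b["prev_hash"]):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"item": "book", "amount": 25.0}, "2020-05-29T10:00:00Z")
append_block(chain, {"item": "lamp", "amount": 9.99}, "2020-05-29T10:05:00Z")
print(verify(chain))                  # True
chain[0]["record"]["amount"] = 250.0  # tamper with the first purchase
print(verify(chain))                  # False: the altered block no longer matches its hash
```

This is why digital signatures, reliable timestamps, and hash verification together make the record auditable: the data itself carries the proof that it has not been edited after the fact.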


GitHub Supply Chain Attack Uses Octopus Scanner Malware

When Octopus Scanner lands on a machine, it looks for signs indicating the NetBeans IDE is in use on a developer's system, GitHub security researcher Alvaro Muñoz explains in a blog post on their findings. If it doesn't find anything, the malware takes no action. If it does, it ensures that every time a project is built, any resulting JAR files are infected with a dropper. When executed, the payload ensures persistence and spreads a remote access Trojan (RAT), which connects to C2 servers. The malware continues to spread by infecting NetBeans projects, or JAR files. This way, it backdoors healthy projects so when developers release code to the public, it contains malware. The goal of Octopus Scanner is to insert backdoors into artifacts built by NetBeans so the attacker can use these resources as part of the C2 server, Waisman says. "When the end user deploys the workload, they have given the attacker access via the backdoor to their resources for use as part of a command-and-control server," he adds. 


How the coronavirus pandemic is affecting developers' mental health

Working from home has always included controversy. While two-thirds of employees prefer to do so--more than a third would choose this perk over a pay raise and another 37% would take a 10% pay cut to stay home--management has traditionally been less than thrilled with the idea. It's often been viewed by executives as a way for workers to underperform in their roles or fly under the radar. As a result, given that many organizations now have no choice but to promote work-from-home capabilities, these are being doled out with increased expectations and heftier accountability requirements. The economic downturn and threat of looming layoffs don't help the situation. I can say I've put in more hours than ever before proving my value in my role to ensure that the systems and services for which I am responsible stay up and running. ... Without commutes it can seem like there are more hours in the day, but at the same time there aren't clear breaks between home and work time, nor the regular breaks for mentally recharging like going out for coffee or even just visiting the snack area and talking to coworkers.


Create Deepfakes in 5 Minutes with First Order Model Method

The basis of deepfakes, or image animation in general, is to combine the appearance extracted from a source image with motion patterns derived from a driving video. For these purposes deepfakes use deep learning, which is where their name comes from (deep learning + fake). To be more precise, they are created using a combination of autoencoders and GANs. An autoencoder is a simple neural network that utilizes unsupervised learning (or self-supervised learning, if we want to be more accurate). Autoencoders get their name because they automatically encode information, and they are usually used for dimensionality reduction. An autoencoder consists of three parts: encoder, code, and decoder. The encoder processes the input, in our case an input video frame, and encodes it. This means it transforms the information gathered into some lower-dimensional latent space – the code. This latent representation contains information about key features of the video frame, such as facial features and body posture. In layman's terms, it captures what the face is doing: whether it is smiling, blinking, and so on.
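The encoder-code-decoder pipeline can be illustrated without any neural network at all. The toy sketch below squeezes a 4x4 "frame" through a 2x2 bottleneck by block averaging; a real autoencoder learns its compression from data rather than hard-coding it, so this shows only the shape of the pipeline:

```python
# Structural sketch of an autoencoder pipeline (encoder -> code -> decoder).
# The maps here are hand-written block averaging/upsampling, purely to show
# information passing through a lower-dimensional bottleneck.

def encoder(frame):
    # "Compress" a 4x4 frame into a 2x2 code by averaging each 2x2 block.
    code = []
    for r in range(0, 4, 2):
        for c in range(0, 4, 2):
            block = [frame[r][c], frame[r][c + 1], frame[r + 1][c], frame[r + 1][c + 1]]
            code.append(sum(block) / 4)
    return code  # the lower-dimensional latent representation

def decoder(code):
    # Reconstruct a 4x4 frame from the 2x2 code by upsampling each value.
    frame = [[0.0] * 4 for _ in range(4)]
    for i, v in enumerate(code):
        r, c = (i // 2) * 2, (i % 2) * 2
        for dr in (0, 1):
            for dc in (0, 1):
                frame[r + dr][c + dc] = v
    return frame

frame = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 2, 2], [0, 0, 2, 2]]
code = encoder(frame)          # 16 values -> 4 values
print(code)                    # [1.0, 0.0, 0.0, 2.0]
print(decoder(code) == frame)  # True: this toy frame survives the bottleneck
```

In a deepfake model the learned code would capture facial features and pose rather than raw pixel averages, and the decoder would be trained to render a face from that code.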


Mobile security forces difficult questions

When it comes to security, compliance and what IT or security teams have the right to do, neither mode is demonstrably better, unless you're willing to put rights and restrictions in writing and — this is the hard part — enforce them. The biggest worry for either mode involves remote wipe. When a device is suspected to have been stolen, remote wipe needs to happen, to reduce the chance of enterprise data being stolen or an attack being waged. That question becomes difficult when the device is owned by the employee. Does the enterprise have the right to wipe it and permanently delete any personal data, images, messages, videos, etc.? We'll get back to BYOD deletions in a moment. But for corporate devices, the deletion would seem to be much easier. And yet, it's not. Many companies encourage employees not to use the corporate mobile device for anything other than work, but few put it in writing and stress that the company may have to obliterate everything on the phone in the case of a perceived security emergency — and insist that it be signed before the phone is distributed.


Digital Distancing with Microsegmentation

Microsegmentation improves data center security by controlling the network traffic into and out of a network connection. Ultimately, the goal of microsegmentation is to implement Zero Trust. Done properly, microsegmentation is effectively a whitelist for network traffic: systems on a given network can communicate only with the specific systems they need to, in the manner they are supposed to, and nothing else. With connections and communications so regimented, microsegmentation is among the best protections we have today against lateral compromise. It allows administrators to protect whatever is on the other end of a network connection from whatever else is on the network, while everything else on the network receives a basic level of protection from whatever might be on the other end of that connection. This is a huge change from the "eggshell computing" model, in which all defenses are concentrated at the perimeter (the eggshell) but everything behind that edge is wide open (the soft insides of the egg).
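The "whitelist for network traffic" idea amounts to a default-deny rule check. The tier names, ports, and rules below are invented for illustration; real microsegmentation enforces such policies at hosts, hypervisors, or network devices rather than in application code:

```python
# Default-deny policy sketch: a flow is permitted only if an explicit rule
# allows it. (Hypothetical tiers, ports, and protocols, for illustration only.)

ALLOW_RULES = {
    # (source, destination, destination port, protocol)
    ("web-tier", "app-tier", 8443, "tcp"),
    ("app-tier", "db-tier", 5432, "tcp"),
}

def is_allowed(src, dst, port, proto):
    # Anything not explicitly whitelisted is dropped: Zero Trust by default.
    return (src, dst, port, proto) in ALLOW_RULES

print(is_allowed("web-tier", "app-tier", 8443, "tcp"))  # True: sanctioned path
print(is_allowed("web-tier", "db-tier", 5432, "tcp"))   # False: lateral move blocked
```

A compromised web server trying to reach the database directly simply has no matching rule, which is exactly how microsegmentation contains lateral compromise.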


Cisco Throws Its Weight Behind SASE

SASE represents an opportunity to put more of Cisco’s networking and security services in the cloud, said Jeff Reed, SVP of product for Cisco’s security business group. Cisco’s SASE offering will tie together elements of its networking, security, and zero-trust product lines. This includes elements of its Viptela and Meraki SD-WAN platforms to address SASE’s WAN and routing requirements. Meanwhile, for security, the vendor will lean on Cisco Umbrella for secure web gateway, domain name system (DNS), firewall, and cloud access security broker (CASB) functionality. Finally, Cisco will integrate core elements of its zero-trust networking portfolio — which includes Duo, SD-Access, and AnyConnect — to verify identity and enhance the overall security of the offering. “We had this opportunity … to basically tie all the strength we have on the network side into the abilities and capabilities we have on the security side,” Reed said. But Reed emphasized that Cisco won’t be “lifting and shifting” existing constructs and running them in the cloud. Cisco is fully embracing the cloud-native underpinnings of SASE, he said. “We’re doing cloud native, so we’re not just lifting and shifting our virtual firewall in the cloud.”


Compare a product vs. project mindset for software development

Enterprises have started to recognize the danger of a project mindset, namely, that everyone focuses less on the product. "A perfect project management system can complete every task ... in a vacuum, with amazing results -- and still fail when it comes time to go to market," said Alexander M. Kehoe, operations director at Caveni Digital Solutions, a web design consultancy. Apple has applied both project and product mindsets. Apple's iPhone innovation enabled it to grow into one of the largest companies in the world. However, critics accuse Apple of releasing a nearly carbon-copy iPhone each year. According to these critics, product quality for these phones has stagnated, as Apple finishes projects with little or no consideration on the product side. Because of this reliance on project-oriented thinking, Kehoe said, the next major mobile phone innovation might not come from Apple. If another company takes the lead in mobile phone innovation, Apple might see its market dominance evaporate overnight, he said.



Report: Debugging Efforts Cost Companies $61B Annually

The report also notes software engineers spend an average of 13 hours to fix a single software failure. According to the report, 41% of respondents identified reproducing a bug as the biggest barrier to finding and fixing bugs faster, followed by writing tests (23%) and actually fixing the bug (23%). Well over half (56%) said they could release software one to two days faster if reproducing failures were not an issue. Just over a quarter of developer time (26%) is spent reproducing and fixing failing tests. On the plus side, 88% of respondents said their organizations have adopted continuous integration (CI) practices, with more than 50% of businesses reporting they can deploy new code changes and updates at least daily. Over a third (35%) said they can make hourly deployments. Undo CEO Barry Morris said the report makes it clear organizations need to be able to record software execution to reduce the amount of time it takes to find bugs. Unfortunately, even then, finding a bug is still a labor-intensive process that can involve analyzing millions of lines of code.


Using Cloud AI for Sentiment Analysis

Natural Language Toolkit (NLTK) is a powerful Python library for natural language processing (NLP) and machine learning. Popular cloud services offer alternative NLP tools that use the same underlying concepts as NLTK. ... If you've followed the NLP sentiment analysis articles we started in Introducing NLTK for Natural Language Processing, you've seen one established approach. The following overviews show what the interface and response look like for sentiment analysis on these cloud services. In many cases it's very similar to NLTK, just using the horsepower of someone else's computers. Amazon Web Services (AWS) provides the Amazon Comprehend NLP service, which includes a range of features analogous to some of what you'll find in NLTK. Similar to NLTK's pos_tag, the AWS service can identify parts of speech (POS) and tag entities such as proper names and places. It can detect 100 languages in unstructured text, and includes text summarization capabilities to identify and extract key phrases that contribute to the overall meaning of a given piece of text.
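For reference, a Comprehend sentiment call via boto3 looks roughly like the commented snippet below. The live call needs AWS credentials, so the runnable part here is a small helper that extracts the winning label from a response-shaped dictionary; the sample response is hand-written to mirror Comprehend's documented shape, not captured from a real call:

```python
# Live boto3 call (requires AWS credentials; shown for shape only):
#   import boto3
#   comprehend = boto3.client("comprehend", region_name="us-east-1")
#   response = comprehend.detect_sentiment(Text="I love this!", LanguageCode="en")

def top_sentiment(response):
    # Comprehend returns an overall label plus per-class confidence scores.
    scores = response["SentimentScore"]
    return max(scores, key=scores.get), response["Sentiment"]

sample = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {"Positive": 0.95, "Negative": 0.01, "Neutral": 0.03, "Mixed": 0.01},
}
print(top_sentiment(sample))  # ('Positive', 'POSITIVE')
```

The overall shape is close to what NLTK's VADER analyzer returns (a dictionary of scores), which is what makes switching between local and cloud sentiment analysis fairly painless.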



Quote for the day:

"If you're not prepared to be wrong, you'll never come up with anything original." -- Sir Ken Robinson