Daily Tech Digest - December 28, 2022

The 5-step plan for better Fraud and Risk management in the payments industry

The overall complexity and size of the digital payments industry make it extremely difficult to detect fraud. In this context, merchants and payment companies can introduce fraud monitoring and anti-fraud mechanisms that verify every transaction in real time. AI-based systems can evaluate a transaction’s authenticity by weighing signals such as the amount, a unique bank card token, the user’s digital fingerprint, and the payer’s IP address. Today, OTPs are synonymous with two-factor authentication and are thought to augment existing passwords with an extra layer of security. Yet fraudsters manage to circumvent them every day. With out-of-band authentication solutions combined with real-time fraud risk management, the service provider can choose among the many multi-factor authentication options available during adaptive authentication, depending on its preference and risk profile. Just like 3D Secure, this is another internationally accepted compliance mechanism, one that requires all the intermediaries involved in the payments system to take special care of sensitive client information.


The Importance of Pipeline Quality Gates and How to Implement Them

There is no doubt that CI/CD pipelines have become a vital part of the modern development ecosystem, allowing teams to get fast feedback on the quality of code before it gets deployed. At least that is the idea in principle. The sad truth is that too often companies squander the fantastic opportunity a CI/CD pipeline offers for rapid test feedback and good quality control by failing to implement effective quality gates in their pipelines. A quality gate is an enforced measure built into your pipeline that the software needs to meet before it can proceed to the next step. This measure enforces certain rules and best practices that the code must adhere to, preventing poor quality from creeping in. It can also drive the adoption of test automation, as it requires testing to be executed in an automated manner across the pipeline. This has the knock-on effect of reducing the need for manual regression testing in the development cycle, driving rapid delivery across the project.
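The gate idea above can be sketched in a few lines. This is a minimal illustration, not any particular CI tool's API: it assumes a coverage percentage has already been produced by whatever test tooling the pipeline runs, and the 80% threshold is an arbitrary placeholder.

```python
# Minimal sketch of a pipeline quality gate: block promotion to the next
# stage when measured code coverage falls below an agreed threshold.
# The coverage figure would normally come from a coverage report; here
# it is passed in directly.

def coverage_gate(coverage_percent: float, threshold: float = 80.0) -> bool:
    """Return True if the build may proceed to the next pipeline stage."""
    return coverage_percent >= threshold

def run_gate(coverage_percent: float, threshold: float = 80.0) -> str:
    """Produce the pass/fail message a pipeline log might show."""
    if coverage_gate(coverage_percent, threshold):
        return "PASS: proceed to next stage"
    return f"FAIL: coverage {coverage_percent:.1f}% below gate of {threshold:.1f}%"
```

In a real pipeline this check would run as a step whose non-zero exit status halts the pipeline; the same pattern applies to gates on lint findings, test failures, or performance budgets.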


Best of 2022: Measuring Technical Debt

Of the different forms of technical debt, security and organizational debt are the ones most often overlooked and excluded in the definition. These are also the ones that often have the largest impact. It is important to recognize that security vulnerabilities that remain unmitigated are technical debt just as much as unfixed software defects. The question becomes more interesting when we look at emerging vulnerabilities or low-priority vulnerabilities. While most will agree that known, unaddressed vulnerabilities are a type of technical debt, it is questionable if a newly discovered vulnerability is also technical debt. The key here is whether the security risk needs to be addressed and, for that answer, we can look at an organization’s service level agreements (SLAs) for vulnerability management. If an organization sets an SLA that requires all high-level vulnerabilities be addressed within one day, then we can say that high vulnerabilities older than that day are debt. This is not to say that vulnerabilities that do not exceed the SLA do not need to be addressed; only that vulnerabilities within the SLA represent new work and only become debt when they have exceeded the SLA.
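The SLA rule described above is mechanical enough to express directly. A small sketch, with illustrative SLA windows (only the one-day window for high-severity issues comes from the text; the others are placeholders):

```python
from datetime import datetime, timedelta

# A vulnerability only counts as technical debt once its age exceeds the
# SLA window for its severity; within the window it is simply new work.
SLA_DAYS = {"high": 1, "medium": 30, "low": 90}  # illustrative values

def is_debt(severity: str, discovered: datetime, now: datetime) -> bool:
    """True when the vulnerability has exceeded its remediation SLA."""
    return (now - discovered) > timedelta(days=SLA_DAYS[severity])
```

Under this rule, a high-severity vulnerability found two days ago is debt, while one found this morning is not, exactly as the article distinguishes.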


DevOps Trends for Developers in 2023

Security automation is the concept of automating security processes and tasks to ensure that your applications and systems remain secure and free from malicious threats. In the context of CI/CD, security automation ensures that your code is tested for vulnerabilities and other security issues before it gets deployed to production. In addition, by deploying security automation in your CI/CD pipeline, you can ensure that only code that has passed all security checks is released to the public and to customers. This helps to reduce the risk of vulnerabilities and other security issues in your applications and systems. The goal of security automation in CI/CD is to create a secure pipeline that allows you to quickly and efficiently deploy code without compromising security. Since manual testing consumes a great deal of developers' time, many organizations are integrating security automation into their CI/CD pipelines today. ... Also, the introduction of AI/ML in the software development lifecycle (SDLC) is getting attention, as models are trained to detect irregularities in the code and give suggestions to enhance or rewrite it.
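One way to picture the "only code that has passed all security checks is released" rule is as a severity gate over scanner findings. This is a hedged sketch: the findings structure is hypothetical, and a real pipeline would parse the output of its actual scanning tool.

```python
# Hypothetical security gate: allow a release only when no scanner
# finding meets or exceeds the blocking severity level.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def security_gate(findings: list, fail_on: str = "high") -> bool:
    """True when every finding is strictly below the blocking severity."""
    blocking = SEVERITY_ORDER[fail_on]
    return all(SEVERITY_ORDER[f["severity"]] < blocking for f in findings)
```

The same shape works whether the findings come from dependency scanning, static analysis, or secret detection; the pipeline simply refuses to promote the build when the gate returns False.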


What Brands Get Wrong About Customer Authentication

When weighing customer friction against practical security needs, one of the main challenges is convincing the revenue side of a business of the need for security best practice. Cybersecurity teams must demonstrate that the financial risks of not putting security in place - i.e., fraud, account takeover, reputation loss, regulatory fines, lawsuits, etc. - outweigh the loss of revenue and abandonment of transactions on the other side. There are always costs associated with security systems, but comparing the costs associated with fraud to those of implementing new security measures will justify the purchase. There is a fine balance between having effective security and operating a business. Customers quickly become frustrated by jumping through hoops to log in, and the password route is unsustainable. It’s time to look at the relationship between security and authentication and develop solutions for both. Taking authentication to the next level requires thinking outside the box. If you want to implement an authentication strategy that doesn’t drive away customers, you need to make customer experience the focal point.


Video games and robots want to teach us a surprising lesson. We just have to listen

The speedy, colorful ghosts zooming their way around the maze greeted me as I stared at the screen of a Pac-Man machine, part of the 'Never Alone: Video Games and Other Interactive Design' exhibit at the Museum of Modern Art in New York City. Using the tiniest amount of RAM and code, each ghost is programmed with its own specific behaviors, which combine to create the masterpiece, according to Paul Galloway, collection specialist for the Architecture and Design Department. This was the first time I'd seen video games inside a museum, and I had come to this exhibit to see if I could glean some insight into technology through the lens of art. It's an exhibit that is more timely now than ever, as technology has been absorbed into nearly every facet of our lives, both at work and at home -- and what I learnt is that our empathy with technology is leading to new kinds of relationships between ourselves and our robot friends. ... According to Galloway, the exhibit takes its name from an Iñupiaq video game it includes, Never Alone (Kisima Ingitchuna).


The increasing impact of ransomware on operational technology

To protect against initial intrusion of networks, organisations must consistently find and remediate key vulnerabilities and known exploits, while monitoring the network for attack attempts. Also, wherever possible, equipment should be kept up to date. VPNs in particular need close attention from cyber security personnel; new VPN keys and certificates must be created, with logging of activity over VPNs enabled. Access to OT environments via VPNs calls for architecture reviews, multi-factor authentication (MFA) and jump hosts. In addition, users should read emails in plain text only, as opposed to rendering HTML, and disable Microsoft Office macros. To counter network access attempts by threat actors, organisations should perform an architecture review for routing protocols involving OT, and monitor for the use of open source tools. MFA should be implemented for access to OT systems, and intelligence sources utilised for threat and communication identification and tracking.


The security risks of Robotic Process Automation and what you can do about it

RPA credentials are often shared so they can be used repeatedly. Because these accounts and credentials are left unchanged and unsecured, a cyber attacker can steal them, use them to elevate privileges, and move laterally to gain access to critical systems, applications, and data. In addition, users with administrator privileges can retrieve credentials stored in locations that are not secured. As many enterprises leveraging RPA have numerous bots in production at any given time, the potential risk is very high. Securing the privileged credentials utilised by this emerging digital workforce is an essential step in securing RPA workflows. ... The explosion in identities is putting more pressure on security teams since it leads to the creation of more vulnerabilities. The management of machine identities, in particular, poses the biggest problem, given that they can be generated quickly without consideration for security protocols. Further, while credentials used by humans often come with organisational policy that mandates regular updates, those used by robots remain unchanged and unmanaged. 


Best of 2022: Using Event-Driven Architecture With Microservices

Most existing systems live on-premises, while microservices live in private and public clouds, so the ability for data to transit the often unstable and unpredictable world of wide area networks (WANs) is tricky and time-consuming. There are mismatches everywhere: updates to legacy systems are slow, but microservices need to be fast and agile. Legacy systems use old communication mediums, but microservices use modern open protocols and APIs. Legacy systems are nearly always on-premises and at best use virtualization, but microservices rely on clouds and IaaS abstraction. The case becomes clear – organizations need an event-driven architecture to bridge these mismatches between legacy systems and microservices. ... Orchestration is a good description – composers create scores containing sheets of music that will be played by musicians with differing instruments. Each score and its musician are like a microservice. In a complex symphony with a hundred musicians playing a wide range of instruments – like any enterprise with complex applications – far more orchestration is required.
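The core decoupling idea behind event-driven architecture can be shown with a toy in-process event bus. This is a sketch only: real systems would use a broker (e.g. a message queue or event-streaming platform) across the WAN, and the topic names here are made up.

```python
from collections import defaultdict

# Toy event bus: publishers and subscribers only share a topic name,
# never a direct reference to each other - a legacy-system adapter can
# publish events that any number of microservices consume.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)
```

Because neither side calls the other directly, a slow-moving legacy system and a fast-moving microservice can evolve independently, which is exactly the mismatch the article says event-driven architecture bridges.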


Scope 3 is coming: CIOs take note

Many companies in Europe have built teams to address IT sustainability and have appointed directors to lead the effort. Gülay Stelzmüllner, CIO of Allianz Technology, recently hired Rainer Karcher as head of IT sustainability. “My job is to automate the whole process as much as possible,” says Karcher, who was previously director of IT sustainability at Siemens. “This includes getting source data directly from suppliers and feeding that into data cubes and data meshes that go into the reporting system on the front end. Because it’s hard to get independent and science-based measurements from IT suppliers, we started working with external partners and startups who can make an estimate for us. So if I can’t get carbon emissions data directly from a cloud provider, I take my invoices containing consumption data, and then take the location of the data center and the kinds of equipment used. I pass all that information to a REST API provided by a Berlin-based company, and using a transparent algorithm, they give me carbon emissions per service.” Internally speaking, the head of IT sustainability role has become more common in Europe—and some of the more forward-thinking US CIOs are starting to see the need in their own organizations.
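The estimation approach Karcher describes, consumption data plus data-center location, reduces to multiplying energy use by a per-location grid carbon intensity. A minimal sketch: the intensity figures below are made-up placeholders, not real data, and a real workflow would call the external provider's API rather than a local table.

```python
# Illustrative Scope 3 estimate: emissions ≈ consumption (kWh) ×
# grid carbon intensity of the data-center location (gCO2e/kWh).
# These intensity values are placeholders for demonstration only.
GRID_INTENSITY_G_PER_KWH = {"eu-west": 300.0, "us-east": 450.0}

def estimate_emissions_kg(kwh: float, region: str) -> float:
    """Rough CO2e estimate in kilograms for a consumption figure and region."""
    return kwh * GRID_INTENSITY_G_PER_KWH[region] / 1000.0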



Quote for the day:

"The only way to follow your path is to take the lead." -- Joe Peterson

Daily Tech Digest - December 27, 2022

Prepping for 2023: What’s Ahead for Frontend Developers

WebAssembly will work alongside JavaScript, not replace it, Gardner said. If you don't know one of the languages that compile to WebAssembly — which acts as a compilation target — Rust might be a good one to learn because it's new and Gardner said it's gaining the most traction. Another route to explore: blending JavaScript with WebAssembly. “Rust to WebAssembly is one of the most mature paths because there's a lot of overlap between the communities, a lot of people are interested in both Rust and WebAssembly at the same time,” he said. “Plus, it's possible to blend WebAssembly with JavaScript so it's not an either-or situation necessarily.” That in turn will yield new high-performing applications running on the web and mobile, Gardner added. “You're not going to see necessarily a ‘Made with WebAssembly’ banner show up on websites, or anything along those lines, but you are going to see some very high-performing applications running on the web and then also on mobile, built off of WebAssembly,” he said. ... “Organizations are trying to automate and improve their test automation, and part of that shift to shipping faster means, you have to find ways to optimize what you're doing,” DeSanto said.


What is FinOps? Your guide to cloud cost management

“FinOps brings financial accountability — including financial control and predictability — to the variable spend model of cloud,” says J.R. Storment, executive director of the FinOps Foundation. “This is increasingly important as cloud spending makes up ever more of IT budgets.” It also enables organizations to make informed trade-offs between speed, cost, and quality in their cloud architecture and investment decisions, Storment says. “And organizations get maximum business value by helping engineering, finance, technology, and business teams collaborate on data-driven spending decisions,” he says. Aside from bringing together the key people who can help an organization gain better control of its cloud spending, FinOps can help reduce cloud waste, which IDC estimates at between 10% and 30% for organizations today. “Moving from show-back cloud accounting, where IT still pays and budgets for cloud spending, to a charge-back model, where individual departments are accountable for cloud spending in their budget, is key to accelerating savings and ensuring only necessary cloud projects are implemented,” Jensen says.
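The charge-back model Jensen describes amounts to attributing each billed line item to the department that incurred it. A minimal sketch, assuming (hypothetically) that cloud resources carry a department tag:

```python
# Toy charge-back allocation: sum tagged cloud costs per department so
# each budget owner is accountable for their own spend. The line-item
# shape is illustrative, not any provider's real billing format.

def charge_back(line_items: list) -> dict:
    """Total cost per department; untagged spend is surfaced separately."""
    totals = {}
    for item in line_items:
        dept = item.get("department", "untagged")
        totals[dept] = totals.get(dept, 0.0) + item["cost"]
    return totals
```

Surfacing an explicit "untagged" bucket matters in practice: untagged spend is exactly the cloud waste that show-back accounting tends to hide.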


IoT Analytics: Making Sense of Big Data

The principles that guide enterprises in the way they approach IoT analytics data are:

- Data is an asset: Data has a specific and measurable value for the enterprise.
- Data is shared: Data must be shared across the enterprise and its business units, and users have access to the data necessary to perform their activities.
- Data trustees: Each data element has trustees accountable for data quality.
- Common vocabulary and data definitions: Data definition is consistent, and the taxonomy is understandable throughout the enterprise.
- Data security: Data must be protected from unauthorised users and disclosure.
- Data privacy: Privacy and data protection are considered throughout the life cycle of a Big Data project, and all data sharing conforms to the relevant regulatory and business requirements.
- Data integrity and transparency of processes: Each party to a Big Data analytics project must be aware of and abide by their responsibilities regarding the provision of source data and the obligation to establish and maintain adequate controls over the use of personal or other sensitive data.


Reframing our understanding of remote work

The remote and hybrid work trend is the most disruptive change in how businesses work since the introduction of the personal computer and mobile devices. Then, like now, the conversation was lost in the weeds. Should we allow PCs? Should we allow employees to bring their own devices? Should we issue pagers, feature phones, then smartphones to employees or let them use their own? In hindsight, it's clear that all these concerns were utterly pointless. The PC revolution was a tsunami of certainty that would wash away old ways of doing everything. So the only question should have been: How do we ensure these devices are empowering, secure, and usable? All focus should have been on the massive learning curve for organizations (what's the best way to deploy, update, secure, provision, purchase, and network these devices for maximum benefit?) and for end users. In other words, while everyone gnashed their teeth over whether to allow devices — or what kind or level of devices to allow — the energy could have been much better spent realizing the entire issue was about skills and knowledge.


Developing Successful Data Products at Regions Bank

Misra said that there are a few especially important components involved in the success of the data product partner role and the discipline of product management for analytics and AI initiatives. One is to ensure that the partner role is strategic, proactive, and focused on critical business needs, and not simply an on-demand service within the company. All data products should address a critical business priority for partners and, when deployed, should deliver substantial incremental value to the business. The teams that work on the products should employ agile methods and include data scientists, data managers, data visualization experts, user interface designers, and platform and infrastructure developers. Misra is a fan of software engineering disciplines — systematic techniques for the analysis, design, implementation, testing, and maintenance of software programs — and believes that they should be employed in data science and data products as well. This product orientation also requires that there’s a big-picture focus, not just by the data product partners but by everyone on the product development teams. 


Amplified security trends to watch out for in 2023

Cybercriminals target employees across different industries to surreptitiously recruit them as insiders, offering them financial enticements to hand over company credentials and access to systems where sensitive information is stored. This approach isn’t new, but it is gaining popularity. A decentralized work environment makes it easier for criminals to target employees through private social channels, as the employee does not feel that they are being watched as closely as they would in a busy office setting. Aside from monitoring user behavior and threat patterns, it’s important to be aware of and be sensitive about the conditions that could make employees vulnerable to this kind of outreach – for example, the announcement of a massive corporate restructuring or a round of layoffs. Not every employee affected by a restructuring suddenly becomes a bad guy, but security leaders should work with Human Resources or People Operations and people managers to make them aware of this type of criminal scheme, so that they can take the necessary steps to offer support to employees who could be affected by such organizational or personal matters. 


What is the Best Cloud Strategy for Cost Optimization?

More often than not, some resources are underutilized. This usually stems from overbudgeting for certain processes. For instance, a cloud computing instance may be so underutilized that it uses less than 5% of its CPU. Note that with cloud services, you pay for storage and computing power rather than physical space. In the instance highlighted above, it’s clear that there’s a case of significant waste. In your bid to optimize costs, it’s best to identify these idle instances and consolidate the workload onto fewer cloud instances. It can be difficult to understand how much power the system uses without adequate visualization. Heat maps are highly useful in cloud cost optimization: they highlight the high and low points of computing demand and consumption. This data can be useful in establishing stop and start times for cost reduction. Visual tools like heat maps can help you identify clogged-up sections before they become problematic. When a system load becomes one-directional, you know it’s time to adjust and balance it before it disrupts your processes.
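The idle-instance check above is easy to automate. A minimal sketch, assuming average CPU utilisation figures are already available from a monitoring system (the instance names and numbers are illustrative):

```python
# Flag instances whose average CPU utilisation sits below a threshold
# as candidates for consolidation; the 5% default mirrors the figure
# quoted in the text.

def find_idle(instances: dict, threshold_percent: float = 5.0) -> list:
    """Return sorted names of instances averaging below the threshold."""
    return sorted(
        name for name, cpu in instances.items() if cpu < threshold_percent
    )
```

Running a report like this on a schedule gives a concrete starting list for the consolidation work the article recommends, before reaching for richer visualizations such as heat maps.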


Server supply chain undergoes shift due to geopolitical risks

Adding to the motivation to exit China and Taiwan was the saber rattling and increasingly bellicose tone from Beijing to Taiwan, along with fairly severe sanctions on semiconductor sales from the U.S. Department of Commerce. This has led some US-based cloud service providers, such as Google, AWS, Meta, and Microsoft, to look at adding server production lines outside Taiwan as a precautionary measure, according to TrendForce. There have been a number of other moves as well. In the US, Intel is spending $20 billion on an Arizona fab and another $20 billion on fabs in Ohio. TSMC is spending $40 billion on fabs in Arizona as well, and Apple is moving production to the US, Mexico, India, and Vietnam. TrendForce also noted a phenomenon it calls “fragmentation” as an emerging model in the management of the server supply chain. It used to be that server production and the assembly process were handled entirely by ODMs. In the future, the assembly task of a server project will be given to not only an ODM partner but also a system integrator.


What’s the Difference Between Kubernetes and OpenShift?

Red Hat provides automated installation and upgrades for most common public and private clouds, allowing you to update on your own schedule and without disrupting operations. This process is perhaps one of the biggest differentiators between OpenShift and the standard Kubernetes environment, as it provides a runbook for updates and uses this to avoid disruption. If you’re running a cluster of OpenShift servers, you will be able to upgrade while applications continue to run, with OpenShift’s orchestration tools moving nodes and containers as required. When it comes to managed on-premises Kubernetes, OpenShift is perhaps best compared with Microsoft’s Azure Arc tooling, which brings Azure’s managed Kubernetes on-premises using the Azure Portal as a management tool, or with VMware’s Tanzu. They are all based on certified Kubernetes, adding their own management tooling and access control. OpenShift is more a sign of Kubernetes’ importance to enterprise application development than anything else.


CISO Budget Constraints Drive Consolidation of Security Tools

Piyush Pandey, CEO at Pathlock, a provider of unified access orchestration, says budget constraints will affect not only solution purchases but also potentially the staff required to run them. “This will likely drive the consolidation of solutions that span across multiple organizations, such as access, compliance, and security tools,” he says. “This consolidation into platforms will help organizations prioritize their resources -- time, money, and people.” He says organizations that focus on comprehensive solutions can drive more synergies across different departments to be compliant. “This won't just be about cost savings, however -- it will also help reduce the complexity of their infrastructure, eliminating multiple standalone tools and solutions,” Pandey adds. Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, explains the global financial downturn has hit multiple sectors, which means budgets are short overall. “The challenge will be keeping cybersecurity postures strong, even in the face of budget cuts,” he says.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - December 26, 2022

Nvidia still crushing the data center market

EVGA CEO Andy Han cited several grievances with Nvidia, not the least of which was that EVGA must compete with Nvidia itself. Nvidia makes graphics cards and sells them to consumers under the brand name Founders Edition, something AMD and Intel do very little or not at all. In addition, Nvidia’s line of graphics cards was being sold for less than what licensees were charging for theirs. So not only was Nvidia competing with its licensees, but it was also undercutting them. Nvidia does the same on the enterprise side, selling DGX server units (rack-mounted servers packed with eight A100 GPUs) in competition with OEM partners like HPE and Supermicro. Das defends this practice. “DGX for us has always been sort of the AI innovation vehicle where we do a lot of item testing,” he says, adding that building the DGX servers gives Nvidia the chance to shake out the bugs in the system, knowledge it passes on to OEMs. “Our work with DGX gives the OEMs a big head-start in getting their systems ready and out there. So it's actually an enabler for them.” But both Snell and Sag think Nvidia should not be competing against its partners. “I'm highly skeptical of that strategy,” Snell says.


A Look Ahead: Cybersecurity Trends to Watch in 2023

Multifactor authentication was once considered the gold standard of identity management, providing a crucial backstop for passwords. All that changed this year with a series of highly successful attacks using MFA bypass and MFA fatigue tactics, combined with tried-and-true phishing and social engineering. That success won’t go unnoticed. Attackers will almost certainly increase multifactor authentication exploits. "Headline news attracts the next wave of also-rans and other bad actors that want to jump on the newest methods to exploit an attack," Bird says. "We're going to see a lot of situations where MFA strong authentication is exploited and bypassed, but it's just unfortunately a reminder to us all that tech is only a certain percentage of the solution." Ransomware attacks have proliferated across public and private sectors, and tactics to pressure victims into paying ransoms have expanded to double and even triple extortion. Because of the reluctance of many victims to report the crime, no one really knows whether things are getting better or worse. 


Why zero knowledge matters

In a sense, zero knowledge proofs are a natural elaboration on trends in complexity theory and cryptography. Much of modern cryptography (of the asymmetric kind) depends on complexity theory because asymmetric security relies on functions that are feasible to compute in one direction but not the other. It follows that the great barrier to understanding ZKP is the math. Fortunately, it is possible to understand conceptually how zero knowledge proofs work without necessarily knowing what a quadratic residue is. For those of us who do care: y is a quadratic residue modulo z if there exists some x such that x² ≡ y (mod z). This rather esoteric concept was used in one of the original zero knowledge papers. Much of cryptography is built on exploring the fringes of math (especially factorization and modular arithmetic) for useful properties. Encapsulating ZKP's complex mathematical computations in libraries that are easy to use will be key to widespread adoption. We can do a myriad of interesting things with such one-way functions. In particular, we can establish shared secrets on open networks, a capability that modern secure communications are built upon.
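The quadratic-residue definition can be checked mechanically for small moduli. This brute-force search is for illustration only: cryptographic use of quadratic residues depends on the check being hard for large composite moduli, which a tiny exhaustive search obviously does not demonstrate.

```python
# Brute-force check of the definition: y is a quadratic residue
# modulo z when some x in [0, z) satisfies x*x ≡ y (mod z).

def is_quadratic_residue(y: int, z: int) -> bool:
    return any((x * x) % z == y % z for x in range(z))
```

For example, 2 is a quadratic residue modulo 7 (since 3² = 9 ≡ 2 mod 7), while 3 is not; the residues modulo 7 are exactly {0, 1, 2, 4}.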


Rust Microservices in Server-side WebAssembly

Rust enables developers to write correct and memory-safe programs that are as fast and as small as C programs. It is ideally suited for infrastructure software, including server-side applications, that requires high reliability and performance. However, for server-side applications, Rust also presents some challenges. Rust programs are compiled into native machine code, which is not portable and is unsafe in multi-tenancy cloud environments. We also lack tools to manage and orchestrate native applications in the cloud. Hence, server-side Rust applications commonly run inside VMs or Linux containers, which bring significant memory and CPU overhead. This diminishes Rust’s advantages in efficiency and makes it hard to deploy services in resource-constrained environments, such as edge data centers and edge clouds. The solution to this problem is WebAssembly (Wasm). Having started as a secure runtime inside web browsers, Wasm can securely isolate programs in their own sandbox. With a new generation of Wasm runtimes, such as the Cloud Native Computing Foundation’s WasmEdge Runtime, you can now run Wasm applications on the server.


How to automate data migration testing

Testing with plenty of time before the official cutover deadline is usually the bulk of the hard work involved in data migration. The testing might be brief or extended, but it should be thoroughly conducted and confirmed before the process is moved forward into the “live” phase. An automated data migration approach is a key element here. You want this process to work seamlessly while also operating in the background with minimal human intervention. This is why I favor continuous or frequent replication to keep things in sync. One common strategy is to run automated data synchronizations in the background via a scheduler or cron job, which only syncs new data. Each time the process runs, the amount of information transferred will become less and less. ... Identify the automatic techniques and principles that will ensure the data migration runs on its own. These should be applied across the board, regardless of the data sources and/or criticality, for consistency and simplicity’s sake. Monitoring and alerts that notify your team of data migration progress are key elements to consider now. 
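The incremental-sync strategy above, each scheduled run copies only what is new since the last run, can be sketched with a watermark. The record shape and integer watermark are illustrative; real migrations typically key on timestamps or change-log sequence numbers.

```python
# Toy incremental sync: copy only records newer than the stored
# watermark, then advance the watermark. Successive runs therefore
# transfer less and less data, as described in the text.

def sync_new_records(source: list, target: list, watermark: int) -> int:
    """Append records with id > watermark to target; return new watermark."""
    new = [r for r in source if r["id"] > watermark]
    target.extend(new)
    return max((r["id"] for r in new), default=watermark)
```

A scheduler or cron job would call this repeatedly in the background; once the run transfers nothing, source and target are in sync and the final cutover window can be very short.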


Clean Code: Writing maintainable, readable and testable code

Clean code makes it easier for developers to understand, modify, and maintain a software system. When code is clean, it is easier to find and fix bugs, and it is less likely to break when changes are made. One of the key principles of clean code is readability, which means that code should be easy to understand, even for someone who is not familiar with the system. To achieve this, developers should, for example, use meaningful names for variables, functions, and classes. Another important principle of clean code is simplicity, which means that code should be as simple as possible, without unnecessary complexity. To achieve this, developers should avoid using complex data structures or algorithms unless they are necessary, and should avoid adding unnecessary features or functionality. In addition to readability and simplicity, clean code should also be maintainable, which means that it should be easy to modify and update the code without breaking it. To achieve this, developers should write modular code that is organized into small, focused functions, and should avoid duplication of code. Finally, clean code should be well-documented.
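A small before-and-after illustrates the naming and small-function principles above. Both versions compute the same thing (a hypothetical price calculation invented for this example); only the second states what it does.

```python
# Before: correct but opaque - nothing tells the reader what a, b, c mean.
def f(a, b, c):
    return a + a * b - c

# After: meaningful names and small, focused functions carry the intent.
def apply_tax(price: float, tax_rate: float) -> float:
    return price * (1 + tax_rate)

def net_price(base_price: float, tax_rate: float, discount: float) -> float:
    """Price after tax and discount; each name states its role."""
    return apply_tax(base_price, tax_rate) - discount
```

The refactored version is also easier to test and modify: the tax rule lives in one place, so changing it cannot silently break the discount logic.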


Artificial intelligence predictions 2023

Synthetic data – data artificially generated by a computer simulation – will grow exponentially in 2023, says Steve Harris, CEO of Mindtech. “Big companies that have already adopted synthetic data will continue to expand and invest as they know it is the future,” says Harris. Harris gives the example of car crash testing in the automotive industry. It would be unfeasible to keep rehearsing the same car crash again and again using crash test dummies. But with synthetic data, you can do just that. The virtual world is not limited in the same way, which has led to heavy adoption of synthetic data for AI road safety testing. Harris says synthetic data is now being used in industries he never expected, in order to improve development, services and innovation. ... Banks will use AI more heavily to give them a competitive advantage to analyse the capital markets and spot opportunities. “2023 is going to be the year the rubber meets the road for AI in capital markets,” says Matthew Hodgson, founder and CEO of Mosaic Smart Data. “Amidst the backdrop of volatility and economic uncertainty across the globe, the most precious resource for a bank is its transaction records – and within this is its guide to where opportunity resides.”


Group Coaching - Extending Growth Opportunity Beyond Individual Coaching

First, since our focus as coaches is on the relationships and interactions between individuals, we don’t coach individuals in separate sessions. Instead, we bring them together as the group/team that they are part of and coach the entire group. Anything said by one member of the team is heard by everyone right there and then. The second building block is holding the mirror to the intangible entity mentioned above. To be accurate, holding the mirror is not a new skill for proponents of individual coaching, but it takes a significantly different approach in group coaching and has a more pronounced impact here. Holding the mirror here means picking up the intangibles and making the implicit explicit: for example, sensing the mood in the room, or reading body language, a drop or increase in energy, head nods, smiles, a drop in shoulders, emotions, etc., and playing back to the room your observation (sans judgement, obviously). Making the intangibles explicit is an important step in group coaching - name it to tame it, if you will. The third building block is believing and trusting that the group system is intelligent and self-healing.


Hybrid cloud in 2023: 5 predictions from IT leaders

Hood says this trend is fundamentally about operators accelerating their 5G network deployments while simultaneously delivering innovative edge services to their enterprise customers, especially in key verticals like retail, manufacturing, and energy. He also expects growing use of AI/ML at the edge to help optimize telco networks and hybrid edge clouds. “Many operators have been consuming services from multiple hyperscalers while building out their on-premises deployment to support their different lines of business,” Hood says. “The ability to securely distribute applications with access to data acceleration and AI/ML GPU resources while meeting data sovereignty regulations is opening up a new era in building application clouds independent of the underlying network infrastructure.” ... “Given a background of low margins, limited budgets, and the complexity of IT systems required to keep their businesses operating, many retailers now understandably rely on a hybrid cloud approach to help reduce costs whilst delivering value to their customers,” says Ian Boyle, Red Hat chief architect for retail.


Looking ahead to the network technologies of 2023

The growth in Internet dependence is really what’s been driving the cloud, because high-quality, interactive user interfaces are critical, and the cloud’s technology is far better for those things, not to mention easier to employ than changing a data center application would be. A lot of cloud interactivity, though, adds to latency and further validates the need for improvement in Internet latency. Interactivity and latency sensitivity tend to drive two cloud impacts that then become network impacts. The first is that as you move interactive components to the cloud via the Internet, you’re creating a new network in and to the cloud that’s paralleling traditional MPLS VPNs. The second is that you’re encouraging cloud hosting to move closer to the edge to reduce application latency. ... What about security? The Internet and cloud combination changes that too. You can’t rely on fixed security devices inside the cloud, so more and more applications will use cloud-hosted instances of security tools. Today, only about 7% of security is handled that way, but that will triple by the end of 2023 as SASE, SSE, and other cloud-hosted security elements explode.



Quote for the day:

"Leadership is unlocking people's potential to become better." -- Bill Bradley

Daily Tech Digest - December 25, 2022

How Value Stream Management is Fueling Digital Transformation

One of the world’s largest aerospace companies, The Boeing Company has been employing VSM for several years now. Through VSM, they optimized resource utilization and reduced waste. “We always thought we were doing a good job of producing value until we started to work through this,” explained Lynda Van Vleet, Boeing’s portfolio management systems product manager. “In our first two years, we saved hundreds of millions of dollars. But that wasn’t our goal. I think a lot of organizations look at this as a way of saving money because you usually do, but if you start out looking at it as a way of creating value, that just comes along with it.” The organization changed legacy approaches to product management and project investment. This enabled them to speed up their ability to innovate and pursue digital transformation. ... By establishing cross-team visibility, leaders were able to spot redundancies. For example, they saw how different IT organizations had their own analytics teams. “We had people in every organization doing the same thing,” explained Van Vleet. Boeing’s executives established a single analytics team to realign the work more efficiently and improve consistency.


Rethinking Risk After the FTX Debacle

The threat surface for FTX clients wasn't just about protecting their FTX passwords or hoping the exchange wouldn't get hacked like the Mt. Gox bitcoin exchange and so many others did. Instead, their portfolios were at risk of implosions over assets and investments they had never heard of. That is the definition of risk: having your hard-earned money and investments merged with a toxic mix of super-risky sludge. That’s a helpless place to be. After more than 20 years in cybersecurity, it is difficult not to think about risk exposure and threat management in a case like this. Security teams are dealing with something much more akin to SBF than Madoff. There is no singular threat facing an enterprise today. Instead, it is a constellation of assets, devices, data, clouds, applications, vulnerabilities, attacks, and defenses. Security teams' biggest weakness is that they are being asked to secure what they can neither see nor control. Where is our critical data? Who is accessing it, and who needs access? Every day in cybersecurity, the landscape of what needs to be protected changes. Applications are updated. Data is stored or in transit among multiple clouds. Users change. Every day represents new challenges.


Quantum Machine Learning: A Beginner’s Guide

Welcome to the world of quantum machine learning! In this tutorial, we will walk you through a beginner-level project using a sample dataset and provide step-by-step directions with code. By the end of this tutorial, you will have a solid understanding of how to use quantum computers to perform machine learning tasks and will have built your first quantum model. But before we dive into the tutorial, let’s take a moment to understand what quantum machine learning is and why it is so exciting. Quantum machine learning is a field at the intersection of quantum computing and machine learning. It involves using quantum computers to perform machine learning tasks, such as classification, regression, and clustering. Quantum computers are powerful machines that use quantum bits (qubits) instead of classical bits to store and process information. This allows them to perform certain tasks much faster than classical computers, making them particularly well-suited for machine learning tasks that involve large amounts of data.
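As a flavour of what such a beginner project builds toward, here is a minimal classical simulation of a one-qubit "variational classifier" in plain Python. Real tutorials would use a quantum framework such as Qiskit or PennyLane; everything below (the single RY rotation, the angle-encoding of the feature, the toy dataset, and the brute-force weight search) is a simplified illustration, not the article's actual code.

```python
import math

def ry_amplitudes(theta):
    """State amplitudes of one qubit after applying RY(theta) to |0>."""
    return (math.cos(theta / 2), math.sin(theta / 2))

def prob_one(theta):
    """Probability of measuring |1>, via the Born rule (amplitude squared)."""
    _, a1 = ry_amplitudes(theta)
    return a1 ** 2

def classify(x, weight):
    """Toy quantum classifier: encode feature x as a rotation angle weight*x."""
    return 1 if prob_one(weight * x) > 0.5 else 0

# Crude "training": scan candidate weights and keep the most accurate one.
data = [(0.2, 0), (0.4, 0), (2.6, 1), (2.9, 1)]
best_w = max((w / 10 for w in range(1, 31)),
             key=lambda w: sum(classify(x, w) == y for x, y in data))
```

Even this toy shows the core loop of quantum machine learning: classical data is encoded into a quantum state, a parameterized circuit transforms it, measurement probabilities give a prediction, and a classical optimizer adjusts the circuit parameters.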


Importance of anti-money laundering regulations among prosumers for a cybersecure decentralized finance

To the best of our knowledge, this is the first study to assess this possibility with supportive evidence from a game theoretical perspective. In addition, our study examines and sheds light on the importance of AML regulations among prosumers in fulfilling the institutional role of preventing cyberattacks by the decentralized governance in a blockchain-based sharing economy. This paper focuses on prosumers as they undertake institutional roles in blockchain-based sharing economy models (Tan & Salo, 2021). In fact, most hackers are prosumers and may serve as end-users as well as developers. Therefore, their impact can be significant in setting the tone for safety and security of a blockchain-based sharing economy. Last but not least, our paper provides policy suggestions for creating effective cybersecurity efforts in permissionless DeFi without relinquishing its decentralized nature. Our first policy suggestion is the integration of artificial intelligence (AI) employing machine learning (ML) techniques to promptly flag, track, and recover stolen tokens from offenders.
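A hedged sketch of the kind of ML-assisted flagging the authors suggest: here a simple statistical outlier rule over transaction amounts stands in for a real model, and the threshold, data, and function names are illustrative only. Production AML systems combine many more signals (counterparties, velocity, graph features) and far more sophisticated models.

```python
import statistics

def flag_suspicious(amounts, z_threshold=2.5):
    """Return indices of transactions whose amount is an extreme outlier."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation in history, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

history = [12, 15, 9, 14, 11, 13, 10, 12, 14, 500]  # one drained-wallet spike
flagged = flag_suspicious(history)
```

In a DeFi setting such a flag would only be the first step; the paper's point is that prompt detection is what makes downstream tracking and recovery of stolen tokens feasible.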


Conscious Machines May Never Be Possible

Pondering this question, it’s important to recognize that intelligence and consciousness are not the same thing. While we humans tend to assume the two go together, intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart, at least by our questionable human standards. If the great-granddaughter of LaMDA does reach or exceed human-level intelligence, this does not necessarily mean it is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures. Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.


Six Ways To Pivot Hiring Strategies To Attract Cybersecurity Talent

To recruit and retain cybersecurity talent, you should change your approach with these six strategies. Learn from past hirings, whether successful or not: Not every hire will turn out as expected, but you can learn from these previous decisions. Remember, an interview is a conversation: You and the candidate have a lot to learn about each other. You could lose a good hire if interviews are tightly controlled and formal. In the “real world” of cybersecurity, communication and collaboration are critical, so that’s the type of environment you should create in the hiring process. Don’t rush to hire: Even if you are understaffed and have vacancies open for some time, you’ll lose more time and money by hiring the wrong people. Be patient in the process. Find someone who matches your culture: Someone can be a brilliant technical candidate but still be wrong for your organization. In many circumstances, culture fit means someone who has soft skills and wants to grow and evolve. Keep in mind that a highly motivated individual is teachable: They can develop their soft and technical skills under you.


DataOps as a holistic approach to data management

The DataOps approach, which takes its cue from the DevOps paradigm shift, is focused on increasing the rate at which software is developed for use with large data processing frameworks. DataOps also encourages line-of-business stakeholders to collaborate with data engineering, data science, and analytics teams in an effort to reduce silos between IT operations and software development teams. This ensures that the organization’s data may be utilized in the most adaptable and efficient manner to provide desirable results for business operations. DataOps integrates many facets of IT, such as data development, data transformation, data extraction, data quality, data governance, data access control, data center capacity planning, and system operations, because it encompasses so much of the data lifecycle. Typically, a company’s chief data scientist or chief analytics officer leads a DataOps team comprised of specialists like data engineers and analysts. Frameworks and related toolsets exist to support a DataOps approach to collaboration and greater agility, but unlike DevOps, there are no software solutions dedicated to “DataOps.”


How edge-to-cloud is driving the next stage of digital transformation

The thing about computing at the edge is that it needs to run at the speed of life. A self-driving car can't take the time to send off a query and await a response when a truck swerves in front of it. It has to have all the necessary intelligence in the vehicle to decide what action to take. While this is an extreme example, the same is true of factory processes and even retail sales. Intelligence, data analysis, and decision making must be available without a propagation delay, and therefore must live at the edge. Of course, all of this adds to the management overhead. Now you have management consoles from a large number of vendors to contend with, plus those for your services on-premises, and then all the stuff up in the cloud. This is where integration is necessary, where it becomes absolutely essential that all your IT resources – from the edge all the way up to the cloud – need to be managed from a single, coherent, manageable interface. It's not just about ease of use. It's about preventing mistakes and being able to keep track of and mitigate threats. 


Cloud to edge: NTT multicloud platform fuels digital transformation

The platform is the heart of our Multicloud as a Service offering because it provides visibility, control and governance across all clouds and for all workloads. It enhances the cloud providers’ native control planes with AI-backed insights for anomaly detection, correlation forecasting, automated operations, agile deployments and more, without limiting direct access to the cloud. These elements give organizations more comfort in consuming these services in a way that is closely aligned with their needs. ... This can be difficult for many clients to do themselves because most have managed their technology in a particular way for years and now have to make a step change into the cloud paradigm. But NTT has operated cloud platforms and delivered managed services across multiple industries and technologies for more than two decades, so we’re perfectly placed to help them make the leap. Some of the components of our platform may be familiar, but how we bring them together is unique. Our many years of operating experience have been baked into this platform to make it a true differentiator.


Top Decentralized Finance (DeFi) Trends in 2023

Governance tokens give individuals the authority to vote on blockchain project development and management-related matters. Giving token holders a say in project operations makes it possible to keep their goals and interests aligned. For example, a DeFi project like Compound lets users use native tokens for various farm or rent income schemes. It has its own token (COMP), which governs the Compound DeFi protocol's growth. ... We will soon see creators and their followers building new social networks. A new immersive fan economy, fueled by social tokens in the metaverse, could revolutionize digital monetization. Communities or celebrities can monetize their brand further by using social tokens. They will create bidirectional relationships between artists and customers with reciprocal benefits. Individuals, rather than organizations, become the agents of creativity in a dispersed collaborative paradigm. It is a unified and linked metaverse where tokenized NFTs may contain digital data rights while storing, tracking, and enforcing those rights.



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - December 24, 2022

3 takeaways to boost your enterprise architect career in 2023

Many people confuse business architecture and IT systems architecture, but they are different practices. Business architects transform business ideas and potential projects that align with an organization's strategies and influence a company's performance as a leader in their market, says Renee Biggs. IT architects contribute to the technology components of business architecture or enterprise architecture. ... Finding the right balance between documentation and implementation is part of an architect's job. But what do you do if your culture values performance a little too much? asks Evan Stoner, a senior specialist solutions architect. His advice is to "deliver what is needed." This means uncovering the current needs and designing for them. Change doesn't happen overnight. The future is uncertain, so architectures are constantly evolving. Software development is a key example, say Neal Fishman and Paul Homan; you must continually redevelop business applications to address new opportunities. But constant change can produce substandard solutions that require frequent revision.


Risk and resilience: compliance in 2023

By starting to prepare early, David Tattam, Chief Research & Content Officer and co-founder of Protecht expects that “businesses will start to realise the tangible benefits a holistic actionable view of risk provides. Smart businesses will track a measurable baseline of risk efficiencies over time, in line with their profit and returns strategy, to demonstrate the ROI of their risk management program”. Some legislation may have complicated and far-reaching impacts. Lee Biggenden, COO and Co-Founder of Nephos Technologies has noticed that “there are ongoing discussions in the European Union about open data platforms which, if passed, could revolutionise how data is used, shared and owned. Anticipated to come into force in 2023, it will have a huge impact on businesses who will need to put the controls and visibility in place over their data regardless of what industry they're in. Although on paper this may seem like a step in the right direction, it does raise concerns about personal privacy as third-party data sharing is a key part of the proposed act. We are all guilty of clicking privacy boxes without reading the full terms and conditions”. 


The Metaverse Doesn’t Have a Leg to Stand On

Zuckerberg’s not the only mark here. Microsoft also placed a bet on the metaverse (the avatars in its iteration, Mesh, also lack legs). In the past few years, a comically wide variety of companies have hired their own “chief metaverse officer,” from Disney and Procter & Gamble to the Creative Artists Agency and the accounting firm Prager Metis. Meta placed its bet on the metaverse in the flashiest way, changing its name, spending all that dough, et cetera, but it’s not alone in its conviction that these virtual worlds are the inevitable future. Even writer Neal Stephenson, who coined the word metaverse in his 1992 novel Snow Crash, founded an actual metaverse company in 2022. In the past few years, metaverse startups like Decentraland and the Sandbox grabbed venture capital interest by hyping themselves as hubs for a new NFT-fueled economy. Despite these companies’ hefty valuations, they have remained decidedly niche. (Refuting a third-party report that it had only 38 active users one day, Decentraland said it had an average of 8,000 daily active users—which is still tiny.) Why did Zuckerberg gamble his business on something so wobbly, so literally legless?


The year 2022 for Women in Tech

The results of the Toptal survey cannot clearly indicate that real progress has been made in the last year. These facts reveal that a bigger change still needs to be achieved. There are certain steps and actions that should be taken by everyone involved in the ecosystem in order to overcome the existing gender inequality in the world and in the domain of technology and STEM specifically. The first step is to break the stereotypes. Stereotypical thinking is at the root of the misconception of the role of women in society and of the jobs and activities that are ‘appropriate’ for them. The second step, bridging the gender gap, presents the next big, challenging shift that must be pursued. The gender gap exists notably in career opportunities, and this is easily noticed when comparing the number of women studying and graduating in the STEM field with the number of women who manage to land jobs in the tech field and achieve real-life professional realization in the tech area. A significant part of the existing gender gap is the pay gap. Still, the gender pay gap demonstrably exists even in the most advanced countries.


The Tech:Forward recipe for a successful technology transformation

Having an approach that is both this comprehensive and detailed was instrumental in aligning one large OEM’s tech-transformation goals. Previous efforts had stalled, often because of competing priorities across various business units, which frequently led to a narrow focus on each unit’s needs. One might want to push hard for cloud, for example, while another wanted to cut costs. Each unit would develop its own KPIs and system diagnostics, which made it virtually impossible to make thoughtful decisions across units, and technical dependencies between units would often grind progress to a halt. The company was determined to avoid making that mistake again. So it invested the time to educate stakeholders on the Tech:Forward framework, detail the dependencies within each part of the framework, and review exactly how different sequencing models would impact outcomes. In this way, each function developed confidence that the approach was both comprehensive and responsive to its needs.


3 cloud architecture best practices for industry clouds

Make no assumptions about the security of industry-specific clouds. Those sold by the larger cloud providers may be secure as stand-alone services; however, they could become a security vulnerability when integrated and operated directly within your solution. The best practice here is to build and design security into your custom applications that leverage industry clouds. Also, do so with integration in mind so no new vulnerabilities are opened. You can take two things that are assumed to be secure independently, and then add dependencies that entirely change the security profile. ... However, you’ll often find the best-of-breed option is on another cloud or perhaps from an independent industry cloud provider that decided to go it alone. The best practice here is to not limit the industry-specific services under consideration. As time goes on, there will be dozens of services to do tasks such as risk analytics for investment banking, for example. Picking the less optimized choice means you’ll lower the value that’s returned to the business. In other words, you make less ROI when you make less optimized decisions.


Benefits of A Technology-Enabled Risk Assessment Process

Organizations need effective risk assessments to manage resources and make informed business decisions that enable growth. Performing an effective risk assessment means going beyond an annual, check-the-box activity by implementing a risk assessment process that can yield actionable results and findings and serve as a business intelligence tool to inform risk management strategies. In today’s rapidly evolving market, banks, financial services companies, payment services providers, and fintechs are focused on improving their processes to create dynamic and efficient digital experiences for their customers. But the improvements do not extend just to customers. ... Technology-enabled solutions – which support the automated or semi-automated collection of data, scoring of inherent risk, mapping of controls, and scoring of residual risk – can help organizations streamline and add efficiencies to their risk assessments and provide a better understanding of real-time risk than the frequently outdated, once-a-year process can. Organizations then can use the valuable business intelligence obtained through the risk assessment process to increase revenue and identify new business opportunities for further growth.
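The inherent-risk and residual-risk scoring the passage mentions can be sketched with a convention that is common in risk management (likelihood × impact on a 1-5 scale, discounted by control effectiveness). The formula and scale are a widely used convention, not something specified in the article.

```python
def inherent_risk(likelihood, impact):
    """Inherent risk before controls, on a 1-5 likelihood x 1-5 impact scale."""
    return likelihood * impact

def residual_risk(likelihood, impact, control_effectiveness):
    """Risk remaining after mapped controls are taken into account.

    control_effectiveness ranges from 0.0 (no mitigation) to 1.0 (fully
    mitigated); the discount factor is a simplifying assumption.
    """
    return inherent_risk(likelihood, impact) * (1 - control_effectiveness)

# Example: a likely (4) and severe (5) risk with strong controls in place.
score = residual_risk(likelihood=4, impact=5, control_effectiveness=0.7)
```

Automating even this simple calculation across an inventory of risks is what lets a technology-enabled process refresh scores continuously rather than once a year.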


Making the case for an Enterprise Architect in Digital Transformation programs

Large transformation work needs to be addressed in 3 buckets – Plan, Build & Run. While “Build” is the largest portion of investments in any transformation program, it is obvious that organizations are required to safeguard whatever has been built with minimal “Run” budgets. “Build” and “Run” are cyclical in nature. For example, you build, then you maintain (run), then you either build more or build something new, and then maintain (run) that. So it is pretty obvious that leaders responsible for large transformations think of Build as the starting point and the transition to Run as the end. It is senior leaders, however, who need to see the things being built (and run) as building blocks of a vision or journey. This is the Planning function (which is strategic). The “Plan” function needs to be executed by someone who understands the business vision and maps the journey to that vision. S/he does that by identifying business capabilities and the value chain (not business processes), and then developing the strategy as building blocks. The skill required to do this is called “Enterprise Architecture”.


EU Cyber Resilience Act: Good for Software Supply Chain Security, Bad for Open Source?

With all of the good that the CRA brings in evolving the regulatory conversations past SBOMs, the current draft has some problematic language that could actually hurt the future of open source. But first, what it gets right about open source. Page 15, Paragraph 10 attempts to exempt, or carve out, open source software (OSS) from the regulations, saying: In order not to hamper innovation or research, free and open-source software developed or supplied outside the course of a commercial activity should not be covered by this Regulation. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. This is good, even great. OSS and project maintainers should be exempt from these regulations that apply liability, as this will have the effect of quashing innovation and sharing of ideas via code. However, in the same paragraph, the CRA attempts to draw a line between commercial and non-commercial use of open source software:


Finding the Right Data Governance Model

It is critical to distinguish the term “governance” from the term “management” in the context of Data Governance. It should be noted that the principal difference between “governance” and “management” is that governance refers to the decisions that must be made and who must make them. This is to ensure effective resource allocation and management of data operations. On the other hand, Data Management involves implementing those decisions that arise from assessing and monitoring either existing controls or the environment that includes advancements in technology and the market. The activities required for Data Governance can, therefore, be distinguished from those needed for Data Management since management is influenced by governance. Data Governance is oversight of Data Management activities to ensure that policy and ownership of data are enforced in the organization. The emphasis is on formalizing the Data Management function and associated data ownership roles and responsibilities.



Quote for the day:

"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn

Daily Tech Digest - December 23, 2022

Why the industrial metaverse will eclipse the consumer one

The industrial metaverse is further ahead on the 3D front, with simulations and digital twins. The industrial metaverse is ahead on the standards front, with companies like Nvidia pushing potential standards such as Universal Scene Description (USD) through its Omniverse platform. USD has been characterized as doing for the metaverse what HTML did for the internet. In this regard, USD can lead to greater interoperability, [connecting] formerly disparate applications or ecosystems … to make workflows more seamless. ... Digital assets, similarly, are typically locked to a particular ecosystem, servicer or game. Many of the most transformative opportunities in the consumer space will also come with mainstream smart glasses, which are still years away before we see a stronger impact. The enterprise and industrial metaverses are also better grounded in ROI, meaning more trials and initial deployments have a higher potential to succeed or lead to more adoption compared to consumer efforts, which have seen more pushback, such as the addition of NFTs in games in Western markets [gaining] limited traction.


Surviving the Incident

The next step in the IR playbook is to identify the "crown jewels" of the organization — the critical systems, services, and operations that, if impacted by a cyber event, would disrupt business operations and cause a loss of revenue. Similarly, organizations must map what types of data are collected, how the data is transmitted and stored, and who should have access to it, to ensure data security. Identifying and mapping critical systems can be accomplished through penetration tests, risk assessments, and threat modeling. A risk assessment is often the first tool used to identify potential attack vectors and prioritize security events. However, to achieve a proactive stance, organizations are increasingly leveraging threat intelligence and modeling to identify and address vulnerabilities and security gaps early on, before a known attack occurs. The primary goal is to identify weaknesses or vulnerabilities in assets to reduce the attack surface and close all the security gaps. This guide will focus on web application security as our attack scenario. Why web application security?


Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know

Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example. AGI, however, would function as humans do. For now, the most notable example of trying to achieve this is the use of neural networks and “deep learning” trained on vast amounts of data. Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters. As more and more data are fed through the network, the parameters stabilise; the final outcome is the “trained” neural network, which can then produce the desired output on new data – for example, recognising whether an image contains a cat or a dog. The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud-computing infrastructures.
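The "feed each data point through, adjust the parameters" loop described above can be shown at its smallest possible scale: a single artificial neuron trained online with per-example gradient steps. This is a toy, not a deep network; the task (classify whether a number is positive), learning rate, and epoch count are all illustrative choices.

```python
import math
import random

def sigmoid(z):
    """Squash a raw score into a (0, 1) "probability"."""
    return 1 / (1 + math.exp(-z))

# One neuron with two parameters (weight, bias), adjusted after every
# individual data point, as the passage describes.
random.seed(0)
weight = random.uniform(-1, 1)
bias = 0.0
learning_rate = 0.5

# Toy labelled data: inputs in [-1, 1], label 1 if the input is positive.
data = [(x / 10, 1 if x > 0 else 0) for x in range(-10, 11) if x != 0]

for epoch in range(200):
    for x, target in data:
        prediction = sigmoid(weight * x + bias)
        error = prediction - target
        # Gradient step: nudge each parameter to reduce this example's error.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

accuracy = sum((sigmoid(weight * x + bias) > 0.5) == (y == 1)
               for x, y in data) / len(data)
```

As more data flows through, the parameters stabilise around values that separate the two classes, which is the same mechanism, scaled up to billions of parameters, behind the large trained networks the article discusses.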


Metaverse Security Concerns Coming Into Focus as Businesses Plan For “Virtual Reality” Futures

Organizations smell potential here, with 23% responding that they are already developing initiatives even as basic specifications are still firming up. Of the respondents that expressed a desire to do business in the metaverse, the leading interest (44%) was customer engagement opportunities. Other popular areas are learning/training measures and workplace collaboration. But when asked about their concerns about expanding into this new area, respondents said that metaverse security was item #1 on the list. By and large, today’s security solutions have not yet considered the prospect of metaverse integration. Nevertheless, 86% of the respondents said that they would feel comfortable sharing user personal information between different metaverse services. Security providers may be waiting to see what users settle on in the metaverse before tailoring their products accordingly. Of the products available thus far, online games are the only ones drawing mass amounts of users (particularly the pre-existing Roblox and Fortnite) along with simple 3D world chat apps that allow users to appear as an avatar.


What’s next for AI

The big companies that have historically dominated AI research are implementing massive layoffs and hiring freezes as the global economic outlook darkens. AI research is expensive, and as purse strings are tightened, companies will have to be very careful about picking which projects they invest in—and are likely to choose whichever have the potential to make them the most money, rather than the most innovative, interesting, or experimental ones, says Oren Etzioni, the CEO of the Allen Institute for AI, a research organization. That bottom-line focus is already taking effect at Meta, which has reorganized its AI research teams and moved many of them to work within teams that build products. But while Big Tech is tightening its belt, flashy new upstarts working on generative AI are seeing a surge in interest from venture capital funds. Next year could be a boon for AI startups, Etzioni says. There is a lot of talent floating around, and in recessions people often rethink their lives—going back into academia or leaving a big corporation for a startup, for example.


How to Innovate by Introducing Product Management in SMB and Non-Tech Companies

It’s common to find product managers and product owners in SaaS, technology, ecommerce, retail, and other B2C companies. Leadership in these companies long ago realized that understanding markets, determining product-market fit, defining customer personas, and understanding value propositions are all key to developing minimally viable solutions and delivering ongoing product enhancements. But establishing product managers and owners in non-tech companies, B2B businesses, SMBs, and the government remains a long-running work in progress. Getting started with innovation comes down to transforming from stakeholder-led backlogs to product-managed, market-driven roadmaps. Tech, media, and ecommerce companies figure this out quickly because chasing stakeholder-driven features often yields subpar results. More traditional businesses are likely to misdiagnose the problems with stakeholder-driven backlogs as a technology execution or platform issue. But there are a few secrets to making product management work even in the most traditional businesses.


IT Job Market: 2022's Wild Ride and What to Expect for 2023

Even as those layoff announcements were rolling in, the US Bureau of Labor Statistics job report for October showed a strong job market for tech pros and continued growth for remote jobs. In November that growth continued with IT industry association CompTIA reporting that US tech companies added 14,400 workers during the month, marking two consecutive years of monthly job growth in the sector. Tech jobs in all industry sectors increased by 137,000 positions. And while job postings for future hiring slipped in November, they still totaled nearly 270,000. As the tech sector heads into a changed 2023 employment market, it’s unclear how all these mixed signals will play out, although experts are starting to weigh in on best practices. Employers are likely looking carefully at budgets and head counts. But it will be a challenging line to walk. Employers have spent the past few years investing in employee experience programs and focusing on retaining their valuable talent. An abrupt change in direction such as mass layoffs will likely sour companies’ reputations as employers.


Inside the Next-Level Fraud Ring Scamming Billions Off Holiday Retailers

Beyond the operation's deep technical know-how, the e-commerce threat group has the sheer speed and volume of scam transactions on its side, Michael Pezely, Signifyd's director of risk intelligence, tells Dark Reading. "E-commerce orders — particularly at the enterprise level — arrive at dizzying speed," Pezely says. "Signifyd, for instance, processed as much as $42 million an hour in orders during Cyber Week. It would be virtually impossible for a human team to review that volume of orders for signs of fraud." Pezely adds that merchants are on the lookout for goods being shipped to a foreign country, but this group of scammers places orders that appear to originate from the US and ship to US addresses. "Furthermore, if a merchant is relying on only its own transaction data, there likely will be a lag between the time a fraud attack begins and when it is recognized," Pezely explains. "Without having the benefit of seeing millions of transactions across thousands of merchants, a novel fraud attack might not be in plain sight for some time."


Protecting your organization from rising software supply chain attacks

The reason for the continued bombardment, said Moore, is increasing reliance on third-party code (including Log4j). This makes distributors and suppliers ever more vulnerable, and vulnerability is often equated with a higher payout, he explained. Also, “ransomware actors are increasingly thorough and use non-conventional methods to reach their targets,” said Moore. For example, even in environments with proper segmentation protocols in place, ransomware agents target IT management software systems and parent companies. Then, after breaching them, they leverage this trusted relationship to infiltrate the infrastructure of that organization’s subsidiaries and trusted partners. “Supply chain attacks are unfortunately common right now in part because there are higher stakes,” said Moore. “Extended supply chain disruptions have placed the industry at a fragile crossroads.” Supply chain attacks are low-cost, often low-effort, and potentially high-reward, said Crystal Morin, threat research engineer at Sysdig. And tools and techniques are often readily shared online, as well as disclosed by security companies, who frequently post detailed findings.


Why User Journeys Are Critical to Application Detection

The first generation of cybersecurity detection technology is rules, but rules only detect known patterns. Individualized rules require expensive experts to maintain: each application is unique, and one must be intimately familiar with its business logic, log formats, usage patterns, etc., in order to write and manage rules for detecting application breaches. ... Over a decade ago, the security market adopted statistical analysis to augment rule-based solutions in an attempt to provide more accurate detection for the infrastructure and access layers. However, UEBA failed to deliver on its promise to dramatically increase accuracy and reduce false-positive alerts, due to a fundamentally mistaken assumption – that user behavior can be characterized by statistical quantities, such as the average daily number of activities. ... The main criterion for success in a detection solution is accuracy, which is dictated by the number of false positives and the number of false negatives. The evolution of detection solutions has led to a third generation of solutions that analyze sequences of activity, i.e. journeys, to contextualize activity and improve detection accuracy.
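The three generations can be contrasted with a deliberately simplified sketch. All event names, thresholds, and "known journeys" below are hypothetical illustrations, not drawn from any real product; the point is only that a volume statistic can miss a low-volume anomaly that a sequence comparison catches.

```python
# A suspicious session: normal volume, but an unusual step in the middle.
session = ["login", "view_profile", "export_all_records", "logout"]

# Gen 1 - rules: flag only known-bad patterns, hand-written per application.
# The rogue export is a novel action, so no rule matches it.
RULES = {"drop_table", "mass_delete"}
rule_hit = any(event in RULES for event in session)

# Gen 2 - statistics (UEBA-style): flag sessions whose activity volume
# deviates sharply from the user's historical average.
daily_counts = [40, 38, 45, 41]                # past activity volumes
mean = sum(daily_counts) / len(daily_counts)
stat_hit = len(session) > 2 * mean             # 4 events looks perfectly normal

# Gen 3 - journeys: compare the *sequence* of activities against pairs of
# consecutive actions observed in legitimate sessions.
KNOWN_JOURNEYS = [
    ["login", "view_profile", "edit_profile", "logout"],
    ["login", "search", "view_record", "logout"],
]
known_pairs = {pair for j in KNOWN_JOURNEYS for pair in zip(j, j[1:])}
novel_pairs = [p for p in zip(session, session[1:]) if p not in known_pairs]
journey_hit = bool(novel_pairs)

print(rule_hit, stat_hit, journey_hit)  # prints: False False True
```

In this toy, the rogue export never appears in the rule list and the session volume sits far below the statistical threshold, so only the journey comparison, which spots a transition no legitimate session contains, raises a flag.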



Quote for the day:

"Before you revel in the anticipation of tomorrow, toil in the preparation of today." -- Tim Fargo