Daily Tech Digest - March 14, 2022

Is low-code safe and secure?

Think back to the early days of computing, when developers wrote their programs in assembly language or machine language. Developing in these low-level languages was difficult and required highly experienced developers to accomplish even the simplest tasks. Today, most software is developed using high-level programming languages, such as Java, Ruby, JavaScript, Python, and C++. Why? Because these high-level languages allow developers to write more powerful code more easily and to focus on bigger problems without having to worry about the low-level intricacies of machine language programming. The arrival of high-level programming languages improved upon machine and assembly language programming and generally allowed less code to accomplish more. This was seen as a huge improvement in the ability to bring bigger and better applications to fruition faster. Software development was still a highly specialized task, requiring specialized skills and techniques, but more people could learn these languages and the ranks of software developers grew.


The Real-World Advantages and Disadvantages of Low-Code Development Platforms

Low-code proponents point to what they claim is another distinct advantage: LCDP technologies help businesses do more with less. What is more, they promise to free skilled software engineers to focus on hard problems, on creative solutions, on what they (i.e., proponents) call “value-creating” work, as distinct from the types of recurrent, repeatable problems that MDSD and LCDP technologies aim to formalize and encapsulate in reusable applications and workflows. “We have four or five developers that … work in Mendix and they accomplish more than a team of, no lie, probably 15 to 20 developers,” Conway Solomon, CEO of Mendix customer WRSTBND, a company that provides event-management software and services, told Kavanagh. “So, what kind of cost savings is that? Especially as a small company that has a lot of ambitions, where you know, like, a lot of extra money has been [spent] on payroll, you can do it in a fraction of the cost and have the same outcome … if not better, and so we use that to our advantage.”


Meta’s Yann LeCun is betting on self-supervised learning to unlock human-compatible AI

The more popular branch of ML is supervised learning, in which models are trained on labeled examples. While supervised learning has been very successful at various applications, its requirement for annotation by an outside actor (mostly humans) has proven to be a bottleneck. First, supervised ML models require enormous human effort to label training examples. And second, supervised ML models can’t improve themselves because they need outside help to annotate new training examples. In contrast, self-supervised ML models learn by observing the world, discerning patterns, making predictions (and sometimes acting and making interventions), and updating their knowledge based on how their predictions match the outcomes they see in the world. It is like a supervised learning system that does its own data annotation. The self-supervised learning paradigm is much more attuned to the way humans and animals learn. We humans do a lot of supervised learning, but we gain most of our fundamental and commonsense skills through self-supervised learning.
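
To make the paradigm concrete, here is a minimal, hypothetical sketch of the kind of self-supervised objective the excerpt describes: the model hides part of its own input, predicts the hidden part, and updates itself based on how well the prediction matched the observation. No human labels are involved; the model size, data, and masking ratio are illustrative only.

```python
# Minimal sketch of self-supervised "masked prediction" training (PyTorch).
# The "label" is part of the input itself, so no human annotation is needed.
import torch
import torch.nn as nn

torch.manual_seed(0)

dim = 16
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    x = torch.randn(32, dim)               # "observations of the world" (toy data)
    mask = torch.rand(32, dim) < 0.25      # hide roughly 25% of each observation
    corrupted = x.masked_fill(mask, 0.0)   # the model only sees the corrupted view

    pred = model(corrupted)                # predict the full observation
    loss = loss_fn(pred[mask], x[mask])    # score only the hidden positions

    optimizer.zero_grad()
    loss.backward()                        # update knowledge from prediction error
    optimizer.step()
```

The same idea, with text tokens or image patches instead of random vectors, underlies the masked-prediction objectives used in large self-supervised models.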


How do we close the huge generation gap on flexible working?

There is a clear generational divide in where people want to work and how they see the purpose of the office. For young people, flexibility is key. They want to be in the office to connect and collaborate with co-workers face to face. That helps them onboard, form working relationships, receive guidance and soak up the company culture – all things that workers have struggled with during the pandemic, as a Microsoft and YouGov study highlighted in December. The office is often the vehicle for knowledge transfer between generations. But they also want to work from home when they need to – to look after a sick relative or wait for a repair engineer, for example. They don’t see this as a major issue, because their view isn’t based on traditional ways of working in an office every day. For them, the workplace no longer stops at the office. They want to work where they want, when they want. And they want their bosses to provide the tools to help them do that. Last year, Microsoft’s Work Trend Index found that 42 percent of employees who worked from home lacked office essentials, and one in 10 didn’t have an adequate internet connection to do their job.


How Do You Identify a Successful Scrum Master?

The Scrum team delivers a valuable Increment every single Sprint. As a framework, Scrum focuses on delivery. Admittedly, this comes with many challenges. However, if a Scrum team is not regularly creating value for its (internal and external) stakeholders, everything else is of lesser importance. (A secondary positive effect of regularly delivering valuable Increments is building trust among stakeholders. Typically, building trust with them results in less supervision, for example, in the form of reporting duties or committees messing with Scrum—you get the idea. All of this bolsters self-management, making work as a Scrum team more effective and enjoyable.) ... Other people want to join the Scrum team because nothing succeeds like success. (People voting with their feet is an excellent indicator of Scrum Master success, and it applies in both directions. My tip: run regular, anonymous surveys in the Scrum team, ask whether team members would recommend an open position in the organization to a good friend with an agile mindset, and track this “employer NPS®” over time to spot trends.)


Knox Wire Introduces an Eye-opening Network for Global Financial Settlements

The Knox Wire system was built utilising world-class distributed ledger technology, with artificial intelligence integrated to improve its efficiency; together these support security, information authentication, and information storage on the network. The company relies on the combined development and finance experience of its team to build the global settlement network, and it maintains professionalism throughout its interactions with institutions, hoping to revolutionise financial systems through innovation. The end goal is to benefit users, institutions, and eventually governments. The onboarding process for financial institutions is straightforward and begins with an agreement with the platform: the institution signs a contract with the settlement network and creates accounts on the Knox Wire system for the relevant employees. The network then provides AI integrations alongside its API parameters to support these processes.


Fintech Roundup: Due diligence makes a comeback and a former Better.com employee speaks out

We all knew – or at least some of us did, ahem – that this was likely not sustainable in the long term. Investors appeared to be backing some startups in part due to FOMO, and that’s not necessarily a good thing. So as the first quarter draws to a close, it’s clear that while fundraises have in no way come to a screeching halt, investors are starting to pump the brakes. Generally, it appears we are experiencing a market pullback – which Alex touches on in this piece – precipitated by a number of things, not least the conflict in Ukraine and disappointing performances by companies that went public in the last year. And fintech, last year’s rising star of venture, is not immune. My former colleague Joanna Glasner at Crunchbase News published a story on March 7 indicating that venture capitalists’ enthusiasm for fintech seems to be waning of late. Her data point, based on Crunchbase data, was that in the two weeks leading up to her post, a total of 51 fintech companies across the globe had collectively raised $1.1 billion in seed through late-stage venture funding.


Ensuring safety of digital communities with next-gen AI and proactive care

Safety would be easier to achieve if there were only one type of problematic behaviour online, but there are so many different categories in places you don’t expect. It has become more difficult for consumers to protect their privacy when there’s so much software beyond the layperson’s understanding. Over a decade ago, a Cambridge Analytica-linked firm abused platforms to deceive people who placed too much trust in what they saw online, swaying an election in Trinidad by encouraging people to abstain from voting and ultimately helping the opposition party win. It was made to look like a natural resistance movement, but it was engineered through corrupt practices. Coronavirus disinformation online has been a major battleground in the last few years. It’s hard to estimate how many lives may have been lost because people trusted unverified sources. The need for platforms to moderate user-generated content has never been more pressing. Schwartz points to the importance of detecting issues early, saying, “If harmful online activity is left unchecked, its reach can grow rapidly and fester, exposing countless users to violent, extremist, or misleading content.”


Talent Shortage: Are Universities Delivering Well-Prepared IT Graduates?

Catherine Southard, vice president of engineering at D2iQ, says her company hasn’t had much success finding new grads with experience in Kubernetes and the Go programming language, in which D2iQ’s product is primarily developed. “Part of that is because the tech landscape changes so quickly. It would be great for a representative from tech companies -- maybe a panel of CTOs -- to sit down with curriculum developers every couple of years and talk through industry trends and where technology is headed, and then brainstorm how to bridge the gap between university and industry,” she says. Southard adds that one thing students can do is research jobs that look interesting and then see what tech stack those companies are using. They can then equip themselves to land those jobs by studying up on that technology, using free online resources or taking courses. She sees another area for improvement in support for internship programs. Historically, D2iQ had a program in the US, but it was expensive to operate and didn’t lead to long-term employee retention, except for a couple of stand-out talents.


IT talent: Rethinking age in the hybrid work era

Never have an organization’s technical capabilities mattered less to its long-term differentiation and competitiveness. The rise of accessible, affordable outsourced vendors, SaaS platforms, and capabilities-on-demand means that most companies can acquire whatever leading-edge technologies and skill sets they need at the moment. Leaders know the companies that win are the ones that get the most out of their people and teams. Resumes don’t tell us much about the skills that matter most in our current climate, and the computers we “hire” to read resume keywords tell us even less. These workers, having seen and been through countless configurations of teams, conflicts, and trends, have figured out how to focus on what makes a difference. We might learn more from them about how to spot and hire the unique capabilities that real people bring to real-people solutions in our workforce. ... As our work becomes physically less proximate, we need to find ways to seek out guidance – not just in classes and courses, but in real time, from our colleagues.



Quote for the day:

"Your first and foremost job as a leader is to take charge of your own energy and then help to orchestrate the energy of those around you." -- Peter F. Drucker

Daily Tech Digest - March 13, 2022

3 leadership lessons from Log4Shell

APIs add to an organization’s attack surface, so it’s important to know where they are used. Gartner estimates that roughly 90% of web apps will soon have more of their exposed attack surface area accounted for by APIs as opposed to their own interfaces. Indeed, in 2021, malicious traffic around APIs grew by nearly 350%. Despite these trends, API use only continues to grow. Gone are the days of monolithic applications. Modern enterprise web applications are built with coupled services that communicate through APIs galore, and each component is a target for attackers if left unchecked. Pair that widened attack surface with the insane growth of APIs, and the need for strong API security is clear. Organizations need to cover their entire attack surface by implementing automated and accurate scans via user interfaces and APIs if they want to eliminate potential weak spots before they become problems. Put simply, security debt is an organization’s total inventory of unresolved security issues. These issues have a wide variety of sources, including knowledge gaps, inadequate tooling or cutting corners during testing in the race to market.


Increasing security for single page applications (SPAs)

First and foremost, the frontend code operates in an insecure environment: a user’s browser. SPAs often possess a refresh token that grants offline access to a user’s resources and can obtain new access tokens without interaction from the user. As these credentials are readable by the SPA, they are vulnerable to cross-site scripting (XSS) attacks, which can have dangerous repercussions such as attackers gaining access to users’ personal data and functionalities not normally accessible through the user interface. As the online data pool grows and hackers become more sophisticated, security must be taken seriously to protect customers’ information and businesses’ reputations. However, designing security solutions for SPAs is no easy feat. As well as the strongest browser security and simple and reliable code, software developers must consider how to deliver the best user experience – wrapping all this into a solution that can be deployed anywhere. The SPA’s web content can be deployed to many global locations via a Content Delivery Network (CDN). Web content is then close geographically to all users so that web downloads are faster.


AI and CSR can strengthen anti-corruption efforts

In addition to CSR, there has been much excitement about the future of AI in anti-corruption work. AI has increasingly become a part of our daily lives, from digital assistants like Siri and Alexa, to self-driving cars like Teslas and ride-hailing applications like Uber. Given that AI has been useful in so many ventures, anti-corruption scholars are eager to apply it to their work. In fact, AI has been described as “the next frontier in anti-corruption.” ... However, AI and anti-corruption discussions so far have mostly focused on governmental efforts to address corporate corruption, not on companies using AI to mitigate corporate corruption — even though many of them already use AI to maximize profit. In the corporate anti-corruption context, AI can provide companies with proposed investment destinations or transactions, help detect corruption risks in such ventures, and improve due diligence processes. AI can also provide more information for yearly anti-corruption policy reviews and assist in designing training based on AI analyses of company processes, reports and operations.


Data Mesh: The Balancing Act of Centralization and Decentralization

Another concept that resonates well is data products. Managing and providing data as a product isn't the extreme of dumping raw data, which would require all consuming teams to perform repetitive work on data quality and compatibility issues. Nor is it the extreme of building an integration layer using one (enterprise) canonical data model with strict conformance from all teams. Data product design is a nuanced approach of taking data from your (complex) operational and analytical systems and turning it into read-optimized versions for organization-wide consumption. This approach to data product design comes with lots of best practices, such as aligning your data products with the language of your domain, setting clear interoperability standards for fast consumption, capturing data directly from the source of creation, addressing time-variant and non-volatile concerns, encapsulating metadata for security, ensuring discoverability, and so on. You can find more of these best practices here.


Role of the Metaverse, AI and digitalization — Are brands and consumers prepared for the new era?

The metaverse has a mostly positive impact on brands, but there are still some loopholes that worry them. For instance, the French champagne house Armand de Brignac has recently filed trademark applications to register the appearance of its gold bottle packaging in virtual reality, augmented reality, video, social media and the web. Like it, many brands have established identities when it comes to product and packaging. Since this alternate reality is fairly new territory for brands, it is difficult for them to gauge if a product or its packaging has distinctiveness outside the metaverse. Even if it does, it is unclear whether those rights will be sufficient to claim infringement inside the metaverse. Among other concerns, the metaverse also brings privacy and security risks to light. Because it is an online-enabled space, it is uncertain whether consumers and brands may face new and unknown privacy and authenticity issues. The rise of the metaverse is much like that of the internet – former Amazon strategist Matthew Ball estimates that by 2027, every company will be a gaming company, implying that the metaverse will soon become a normal part of people’s lives.


Data Protection In The EU: New GDPR Right Of Access Guidelines

The right of access has a broad scope: in addition to basic personal data, according to the EDPB it also includes, for example, subjective notes made during a job application, a history of internet and search engine activity, etc. Unless explicitly stated otherwise, the request must be understood to relate to all personal data relating to the data subject, but the controller may ask the data subject to specify the request if it processes a large amount of data. This applies to each request: if a data subject makes more than one request, it would therefore not be sufficient to provide access only to the changes since the last request. Even data that may have been processed incorrectly or unlawfully should be provided. Data that has already been deleted, for example in accordance with a retention policy, and is therefore no longer available to the controller, does not need to be provided. Specifically, the controller will have to search all IT systems and other archives for personal data using search criteria that reflect the way the information is structured, for example, name and customer or employee number.


Even 'Perfect' APIs Can Be Abused

Even those organizations that do bring a proactive focus to application security tend to put more emphasis on protecting APIs created for web and mobile applications. In these cases, many organizations incorrectly assume that their web application firewalls (WAFs) will bear much of the load of securing this type of API usage. But the biggest API protection gap — even in sophisticated organizations — is protection of APIs that are open to partners. These APIs are ripe for abuse. Even if they are perfectly written and have no vulnerabilities, they can be abused in unanticipated ways to expose the core business functions and data of the organizations that share them. Perhaps the best example of this is the Cambridge Analytica (CA) scandal that rocked Facebook in 2018. As a brief refresher, CA exploited Facebook's open API to gather extensive data about at least 87 million users. This was accomplished by using a Facebook quiz app that exploited a permissive setting that allowed third-party apps to collect information about the quiz-taker, as well as all of their friends' interests, location data, and more.


Five cloud security risks your business needs to address

“Misconfigurations remain a top risk for cloud applications and data,” says Paul Bischoff, privacy advocate and editor at Comparitech, a website that rates technologies on their cybersecurity. A misconfiguration happens when an IT team inadvertently leaves the door open for hackers by, say, failing to change a default security setting. This is often down to human error and/or a misunderstanding of how a firm’s systems operate and interact. If misconfigurations happen on a non-cloud-connected network, they’re self-contained and, potentially, accessible only to those in the physical workplace. But, once your data is in the cloud, “it is subject to someone else’s security. You do not have any direct control or ability to test it,” notes Steven Furnell, professor of cybersecurity at the University of Nottingham. “This means trusting another party’s measures, so look for the appropriate assurances from them rather than making assumptions.” 


8 technology trends for innovative leaders in a post-pandemic world

Leaders today are faced with the task of taking difficult decisions that can have a profound impact on their workforce and employee wellbeing (although it’s not all grim) in a very uncertain environment. New risks have also emerged with the staggering amount of data created on the internet, such as cyber-attacks that are increasingly frequent and costly. What our Young Global Leaders know well is that it’s easy to lead when times are going well, but real responsibility emerges when you must stand up for what you believe in. Responsible leaders truly shine in times of crisis. With this in mind, we asked eight Young Global Leaders how they will leverage technology and innovate to become better leaders in 2022. New computational and AI tools are already being used by business leaders to guide strategic decision-making. In the next decade, this software will become more powerful and will be applied in new and different settings. Built upon the mathematics of game theory, AI tools harness the computational innovations that power chess engines.


As cloud costs spiral upward, enterprises turn to a thing called FinOps

Enter FinOps. This practice is intended to help organizations get maximum business value from cloud "by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions," according to the FinOps Foundation. (Yes, there's now even an entire foundation devoted to the practice.) In many cases, organizations are practicing the art of FinOps without even calling it that. Respondents are actively involved in ongoing usage and cost management for both SaaS (69%) and public cloud IaaS and PaaS (66%). "More and more users are swimming in the FinOps side of the pool, even if they may not know it -- or call it FinOps yet," the Flexera survey's authors state. In addition, for the sixth year in a row, "optimizing the existing use of cloud is the top initiative for all respondents, underscoring the need for FinOps teams or similar ways to improve cost savings initiatives," they note. While the survey doesn't explicitly ask about FinOps adoption, the authors also state that some organizations have organized FinOps teams to assist in evaluating cloud computing metrics and value.



Quote for the day:

"The art of leadership is saying no, not yes. It is very easy to say yes." -- Tony Blair

Daily Tech Digest - March 12, 2022

The Similarities and Differences between ITIL 4 and VeriSM

Even though ITIL has been around for many years and is considered the de facto best practice framework for IT service management (ITSM), VeriSM emerged in 2018 to find its place in the market. And this came before the launch of ITIL 4 from AXELOS in February 2019. VeriSM’s publication introduced some modern approaches in service management, such as Agile and shift-left, among others. ITIL 4, once released, also incorporated these modern concepts that have conquered the IT world during the last few years. VeriSM claims not to be a body of service management best practice but instead an approach whose key facet (it’s not a process flow, nor a set of procedures) is the Management Mesh, in which all the popular management practices (ITIL, COBIT, ISO/IEC 20000, CMMI-SVC, DevOps, Agile, Lean, SIAM, etc.) and emerging technologies and trends (artificial intelligence (AI), containerization, the Internet of Things (IoT), big data, cloud, shift-left, continuous delivery, CX/UX, etc.) are included. Maybe there’s some truth in this statement.


Solo.io Intros Gloo Mesh Enterprise 2.0

Introduced last year, Gloo Mesh Enterprise is an Istio-based, Kubernetes-native solution for multicluster and multimesh service mesh management. New features in 2.0 such as multitenant workspaces enable users to set fine-grained access control and editing permissions based on roles for shared infrastructure, enabling teams to collaborate in large environments. Users can manage traffic, establish workspace dependencies, define cluster namespaces, and control destinations directly in the UI, and policies can be re-used and adapted using labels. Gloo Mesh Enterprise 2.0 also features a new Gloo Mesh API for Istio management that enables developers to configure rules and policies for both north-south and east-west traffic from a single, unified API. The new API also simplifies the process of expanding from a single cluster to dozens or hundreds of clusters. And the new Gloo Mesh UI for observability provides service topology graphs that highlight network traffic, latency, and speeds while automatically saving the new state when you move clusters or nodes.


Introducing Community Security Analytics

You can use CSA to further investigate high-fidelity security findings from Security Command Center (SCC) and correlate them with logs for decision-making. For example, you may use a CSA query to get the list of admin activity performed by a newly created service account key flagged by Security Command Center in order to validate any malicious activity. It’s important to note that the detection queries provided by CSA will be self-managed, and you may need to tune them to minimize alert noise. If you’re looking for managed and advanced detections, take a look at SCC Premium’s growing threat detection suite, which provides a list of regularly updated managed detectors designed to identify threats within your systems in near real time. CSA is not meant to be a comprehensive, managed set of threat detections, but a collection of community-contributed sample analytics that give examples of essential detective controls based on cloud techniques. Use CSA in conjunction with our threat detection and response capabilities and our threat prevention capabilities.
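
To make the "self-managed query" idea concrete, here is a hedged sketch of running one such detection from Python against audit logs exported to BigQuery, using the google-cloud-bigquery client. The project, dataset, table, and field names are placeholders rather than the published CSA queries (real Cloud Audit Logs exports differ in detail), and any real query would need the tuning mentioned above to keep alert noise down.

```python
# Hypothetical sketch: run a CSA-style detection query over audit logs in BigQuery.
# Dataset/table/field names below are placeholders, not the published CSA queries.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT timestamp,
       protopayload_auditlog.authenticationInfo.principalEmail AS principal,
       protopayload_auditlog.methodName AS method
FROM `my_project.my_audit_dataset.cloudaudit_googleapis_com_activity`  -- placeholder
WHERE protopayload_auditlog.authenticationInfo.principalEmail LIKE '%gserviceaccount.com'
  AND timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY timestamp DESC
"""

# Results stream back as Row objects; in practice you would feed these into
# triage or alerting rather than printing them.
for row in client.query(query).result():
    print(row.timestamp, row.principal, row.method)
```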


µTransfer: A technique for hyperparameter tuning of enormous neural networks

Our theory of scaling enables a procedure to transfer training hyperparameters across model sizes. If, as discussed above, µP networks of different widths share similar training dynamics, they likely also share similar optimal hyperparameters. Consequently, we can simply apply the optimal hyperparameters of a small model directly onto a scaled-up version. We call this practical procedure µTransfer. If our hypothesis is correct, the training loss-hyperparameter curves for µP models of different widths would share a similar minimum. Conversely, our reasoning suggests that no scaling rule of initialization and learning rate other than µP can achieve the same result. This is borne out by varying the parameterization: interpolating the initialization scaling and the learning rate scaling between the PyTorch default and µP shows that µP is the only parameterization that preserves the optimal learning rate across width, achieves the best performance for the model with width 2^13 = 8192, and ensures that wider models always do better for a given learning rate—that is, graphically, the curves don’t intersect.
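
The procedure itself is easy to state: sweep hyperparameters on a small proxy model, then reuse the best values on the wide model. Below is a runnable toy sketch of that workflow in ordinary (non-µP) parameterization; the model, data, and helper functions are invented for illustration, and without the µP scaling rules described above the proxy-tuned learning rate is not guaranteed to stay optimal at large width.

```python
# Schematic toy of the muTransfer workflow (standard parameterization, not the
# actual muP scaling rules): tune the learning rate on a narrow proxy model,
# then reuse the best value on a wider model.
import math
import torch
import torch.nn as nn

def build_model(width: int) -> nn.Module:
    return nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 1))

def train_and_eval(model: nn.Module, lr: float, steps: int = 200) -> float:
    torch.manual_seed(0)
    x = torch.randn(512, 32)
    y = x.sum(dim=1, keepdim=True)          # toy regression target
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# 1) Sweep the learning rate on a cheap, narrow proxy model.
candidate_lrs = [2.0 ** -k for k in range(2, 10)]
losses = {lr: train_and_eval(build_model(64), lr) for lr in candidate_lrs}
finite = {lr: l for lr, l in losses.items() if math.isfinite(l)}  # drop diverged runs
best_lr = min(finite, key=finite.get)

# 2) Reuse the proxy's best learning rate on the scaled-up model. Under muP,
#    this optimum would be expected to remain near-optimal as width grows.
wide_loss = train_and_eval(build_model(4096), best_lr)
print(f"proxy-tuned lr={best_lr}, wide-model loss={wide_loss:.4f}")
```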


Will Transformers Take Over Artificial Intelligence?

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree. The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more. “Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich. Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence.


The Questionable Ethics Of Bitcoin ESG Junk Science

In February 2022, an op-ed, titled “Revisiting Bitcoin’s Carbon Footprint,” was published in the scientific journal “Joule,” authored by four researchers: Alex de Vries, Ulrich Gallersdörfer, Lena Klaaßen and Christian Stoll. Their written commentary, which admits limitations in their estimates, states that as bitcoin miners migrated from China to Kazakhstan and the United States in 2021, the network’s carbon footprint increased to 0.19% of global emissions. What went unnoticed by the media was that the researchers have professional motives to overstate Bitcoin’s relatively tiny environmental impact. The op-ed’s lead author, Alex de Vries, failed to disclose that he is employed by De Nederlandsche Bank (DNB), the Dutch central bank. Central banks are no fans of open, global payment rails, which bypass monopolistic government settlement layers. De Vries first released his “Bitcoin Energy Consumption Index” in November 2016, which coincides with his first round of employment with DNB, giving the appearance that DNB encouraged his critique of Bitcoin’s energy consumption. 


DBaaS and the Enterprise

From a DBA perspective (and being a former DBA myself), I always enjoyed working on more challenging issues. Mundane operations like launching servers and setting up backups make for a less-than-exciting daily work experience. When managing large fleets, these operations make up the majority of the work. As applications grow more complex and data sets grow rapidly, it is much more interesting to work with the application teams to design and optimize the data tier. Query tuning, schema design, and workflow analysis are much more interesting (and often beneficial) when compared to the basic setup. DBAs are often skilled at quickly identifying issues and understanding design issues before they become problems. When an enterprise adopts a DBaaS model, this can free up the DBAs to work on more complex problems. They are also able to better engage and understand the applications they are supporting. A common comment I get when discussing complex tickets with clients is: “well, I have no idea what the application is doing, but we have an issue with XYZ”.


How to Develop Strategies that Close the Leadership Gap with the Generation Gap

The leadership gap that has been forecasted for the past several years is upon us. And, it could not have come at a worse time with the Covid-19 pandemic still underway, impacting each of the multiple generations in the workforce differently. Many companies are unable to keep pace with their need to fill leadership openings created by Baby Boomers taking retirement and by companies expanding, in some cases at rapid rates. Their pipelines are not sufficient to fill the increasing number of leadership openings promptly. Companies that lack a focused strategy and drive to close this gap might very well find themselves struggling to stay in business and maintain their market share. The significant numbers of Baby Boomers taking retirement for the past ten years have only exacerbated the leadership gap. Many of them are leaving their leadership roles for their well-earned leisure lifestyle. In the third quarter of 2020, the number of Boomers who retired increased by over three million from the same quarter in 2019. 


How Digital Transformation is Rebuilding the Construction Industry

As construction companies continue to comply with pandemic restrictions, technology has been essential to the implementation of health and safety measures. For instance, firms can use wearables and AI sensors to detect when workers are not maintaining proper physical distance. Some construction projects are even using contact tracing devices that alert employees when there are too many personnel at a worksite; these can identify potentially infected individuals in the event of a confirmed COVID-19 case. These measures not only prioritize employee safety, but also help companies avoid entire site shutdowns. Even remotely, technology is a vital asset to construction firms. With fewer personnel allowed on-site, companies can rely on new cloud-based video platforms to assist with site monitoring. In the city of Miami, virtual inspections of construction sites through either a Zoom or a Microsoft Teams video call are now routine between engineers on site and building control officials. With usage tripling in 2020 alone, drones are also being used more frequently to improve mapping and surveying processes.


It’s not a Great Resignation–it’s a Great Rethink

Leaders often regard purpose in a limited way as either a marketing or human resources exercise. Companies that go deepest with purpose take a much more comprehensive approach, treating purpose as an operating system and embedding it in processes, organizational structures, and culture. Global professional services firm EY adopted a system of metrics to spur behaviors associated with its purpose. “Companies really have to be able to show what they’re doing,” EY’s CEO Carmine Di Sibio told me. “They get into trouble when they talk a lot about purpose and it’s just talk.” Imagine what it feels like when everything about your work ties back in clear, even obvious ways to your purpose. That’s what employees at deep-purpose companies experience on the job. It’s encouraging that some CEOs—68% of those queried in one survey—are placing “more emphasis” on purpose, but that’s not enough. For purpose to feel genuine and meaningful, they must live it in their daily work, hold others accountable for acting in ways congruent with that purpose, and bring it alive for their workforce.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." -- Colin Powell

Daily Tech Digest - March 11, 2022

The secrets of successful cloud-first strategies

“Cloud-native is much more than just technology,” Rubina says. Companies need to make a fundamental shift in mindset away from traditional waterfall development toward more agile development principles, such as the DevOps model and automation. “Cloud-native must be a strategic approach; it must be driven by top management as it is a response to a wide range of business needs,” Rubina says. “And these need to be well defined and rolled out by senior management. It is about changes in the business model, about entering new markets, about the ability to adapt quickly to create innovative products and services and drastically reduce time to market.” ... “Determine if you’ll be using a fixed-cost structure or one that is flexible for the cloud,” Hon says. “Are you leveraging showback or chargeback to the business? And keep in mind seasonality. You want to have an idea of how often you scale up and shrink down and what that looks like. Building out cost models is key for how you can build a budget.” With a traditional data center, companies buy and install hardware with workload peaks in mind, Hon says.
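
As a toy illustration of the cost modeling Hon describes, the sketch below budgets annual spend when capacity scales with seasonal demand and contrasts it with a data center sized for the yearly peak. Every figure (rates, instance counts, seasonal factors) is an invented placeholder, not real pricing.

```python
# Toy budgeting sketch: cloud cost scales with seasonal usage, while a
# traditional data center is sized (and paid for) at the annual peak.
# All numbers here are invented placeholders, not real pricing.

HOURS_PER_MONTH = 730
COST_PER_INSTANCE_HOUR = 0.10           # hypothetical on-demand rate

baseline_instances = 20
seasonal_factor = {                     # assumed relative demand by month
    "Jan": 1.0, "Feb": 1.0, "Mar": 1.1, "Apr": 1.1, "May": 1.2, "Jun": 1.3,
    "Jul": 1.3, "Aug": 1.2, "Sep": 1.1, "Oct": 1.2, "Nov": 1.8, "Dec": 2.0,
}

cloud_monthly = {
    month: baseline_instances * factor * HOURS_PER_MONTH * COST_PER_INSTANCE_HOUR
    for month, factor in seasonal_factor.items()
}
cloud_annual = sum(cloud_monthly.values())

# A data center sized for the December peak pays for that capacity all year.
peak_instances = baseline_instances * max(seasonal_factor.values())
datacenter_annual = peak_instances * HOURS_PER_MONTH * COST_PER_INSTANCE_HOUR * 12

print(f"Cloud (scale with demand): ${cloud_annual:,.0f}/year")
print(f"Peak-sized data center:    ${datacenter_annual:,.0f}/year")
```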


Moving a Legacy Data Warehouse to the Cloud? SAP’s Answer

As currently designed, SAP BW Bridge is primarily intended as a data acquisition and staging layer — which has been the mainstay for BW. Other capabilities, such as planning, are outside the scope of BW Bridge; instead, SAP directs BW customers to SAP Analytics Cloud (SAC) Planning, which for now is a separate offering; later this year, SAP will add support for SAC Planning on top of DWC-based data models. The scenarios for SAP BW Bridge are, not surprisingly, centered around bringing SAP legacy data sources into the modern cloud data warehousing world. They could include customers that want to replicate their existing BW environment and operational reports, but in a managed cloud environment. And of course, they could involve migrating more modern BW/4HANA on-premises systems as well. As noted above, with cross-space sharing, BW data can be shared with other greenfield analytics, and mixed with data from other sources, developed in DWC. SAP BW Bridge is an acknowledgment that, for classic or legacy systems to make it into the cloud, cloud SaaS providers have to provide more flexibility to accommodate the types of customizations that permeate legacy systems.


Database technology evolves to combine machine learning and data storage

The new model offers not just the potential for tapping the power of AI algorithms, but also a more flexible search engine that isn’t locked into searching for exact matches. While traditional databases require the names to be spelled correctly or the exact confirmation code to locate a record, Weaviate can find entries that are the most similar. What does it mean to be similar? That’s still a wide-open question for many users. Much of the art goes into defining how to calculate just how close or far apart two pieces of data might be. Finding the closest records in the database begins with finding a metric, a way to specify just what it means to be nearby in some multidimensional space defined by an AI. While SeMI Technologies is the main fundraiser, much of the branding is focused on Weaviate, the open source database. Companies can download the code or purchase Weaviate as a managed service. Many Weaviate users rely on pre-built models for text in English and other well-known languages. SeMI has built one model out of the entire collection of Wikipedia articles so that people can experiment.
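
To illustrate what "nearby in some multidimensional space" means in practice, here is a minimal, generic sketch of vector similarity search using cosine distance over embedding vectors. It is not the Weaviate API itself, and the embeddings are random placeholders for what a trained model would produce.

```python
# Generic vector-similarity sketch (not the Weaviate client): records are stored
# as embedding vectors, and "closest" is defined by a distance metric, here
# cosine distance. Real systems would use embeddings from a trained model and
# an approximate nearest-neighbour index instead of a brute-force scan.
import numpy as np

rng = np.random.default_rng(0)
records = {f"record-{i}": rng.normal(size=128) for i in range(1000)}  # fake embeddings

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query: np.ndarray, k: int = 5):
    scored = [(cosine_distance(query, vec), name) for name, vec in records.items()]
    return sorted(scored)[:k]              # smallest distance = most similar

query_vector = rng.normal(size=128)        # would come from embedding the query text
for dist, name in nearest(query_vector):
    print(f"{name}: distance={dist:.3f}")
```

Choosing the metric (cosine, Euclidean, dot product) and the embedding model is exactly the "art" the excerpt refers to: different choices change which records count as similar.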


Using the Problem Reframing Method to Build Innovative Solutions

First, reframing is not analysis. It is not about finding the root cause and asking “Why does this problem exist?” Reframing starts before that, when you ask “What problem are we trying to solve?” and “Is this the right problem to solve?” Reframing is about looking at the big picture and thinking about the problem from different angles. Second, reframing is not about finding the real problem but about finding a better problem to solve. The advantages of reframing a problem are generating more options, opening up the problem space (diverging) and, in the end, building better solutions by solving a better problem. Let’s look at a simple example that Thomas Wedell-Wedellsborg explains in his book “What’s Your Problem?” Imagine you are the owner of an office building, and your tenants are complaining about the elevator. It’s too slow, and they have to wait a lot. Several tenants are threatening to break their leases if you don’t fix the problem. When asked, most people directly jump into thinking of solutions: install a new lift, upgrade the motor, or perhaps improve the algorithm that runs the lift.


Code Verify: An open source browser extension for verifying code authenticity on the web

Code Verify expands on the concept of subresource integrity, a security feature that lets web browsers verify that the resources they fetch haven’t been manipulated. Subresource integrity applies only to single files, but Code Verify checks the resources on the entire webpage. To do this at scale, and to enhance trust in the process, Code Verify partners with Cloudflare to act as a trusted third party. We’ve given Cloudflare a cryptographic hash source of truth for WhatsApp Web’s JavaScript code. When someone uses Code Verify, the extension automatically compares the code that runs on WhatsApp Web against the version of the code verified by WhatsApp and published on Cloudflare. If there are any inconsistencies, Code Verify will notify the user. While comparing hashes to detect files that have been tampered with is not new, Code Verify does so automatically, with the help of Cloudflare’s third-party verification, and at this scale for the first time. WhatsApp’s security protections, the Code Verify extension, and Cloudflare all work together to provide real-time code verification. 
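
Conceptually, the check works like subresource integrity extended over a whole page: hash what was actually received and compare it with a published source of truth. The sketch below is a simplified, hypothetical illustration of that comparison; the manifest contents and file names are placeholders, and the real extension performs this inside the browser rather than as a standalone script.

```python
# Simplified illustration of page-wide hash verification (not the actual
# Code Verify extension). A trusted third party publishes expected hashes;
# the client hashes the resources it actually received and compares.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical "source of truth" published via a trusted third party.
expected_hashes = {
    "app.js":    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
    "vendor.js": "486ea46224d1bb4fb680f34f7c9ad96a8f24ec88be73ea8e5a6c65260e9cb8a7",
}

def verify(received: dict[str, bytes]) -> list[str]:
    """Return the names of resources whose hashes do not match the manifest."""
    mismatches = []
    for name, expected in expected_hashes.items():
        body = received.get(name, b"")
        if sha256_hex(body) != expected:
            mismatches.append(name)
    return mismatches

# The page actually served these bytes (placeholders for real JavaScript).
received_resources = {"app.js": b"hello", "vendor.js": b"tampered code"}
bad = verify(received_resources)
print("OK" if not bad else f"Inconsistency detected in: {', '.join(bad)}")
```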


Chainlink for Enterprises: The Gateway to All Blockchains

Chainlink is secure, open-source blockchain middleware (referred to as an “oracle”) that provides smart contracts with any type of data or computation that they cannot inherently obtain on their native blockchain due to technical, financial, governance, or legal constraints. Unlike blockchains, which maintain internal consistency around transaction validation, Chainlink aims to generate and deliver oracle reports to blockchains that accurately reflect the state of external events and computation. Chainlink oracles are able to generate oracle reports because the Chainlink oracle node software can read data from and write data to blockchains and APIs and perform off-chain computation. Chainlink generates trust-minimization for oracle reports through mechanisms similar to those used by blockchains, such as decentralized validation, cryptographic signatures, and financial/reputational incentives outlined in service level agreements (SLAs). 


A decade of IoT: 10 years forward and 10 years back

As with many markets, the winners are often those that have specialisms, such as CSL’s critical connectivity or Peoplesafe’s personal safety solutions, or those that have scale, such as Wireless Logic. Businesses can achieve success in different ways but ultimately they need to provide a solution that the market needs, along with executing their vision efficiently on the back of strong market tailwinds. However, with the half-life of business models shortening, especially in fast-growth technology, great management teams must continue to evolve to ensure they maintain their competitive advantage. This evolution could mean adding new services, more security, or providing an improved service wrapper for the customer. Moreover, another key differentiator might be the route to market. For example, at ECI’s former investment, Arkessa, it understood there was a significant opportunity to introduce IoT connectivity at the point of manufacture rather than in the aftermarket. There has already been some consolidation of the market as companies aim to dominate certain verticals within IoT or scale up.


Low-Code Tools Optimize Engineering Time for Internal Applications

A large part of application development in today’s world involves a lot of application management. Low-code tools are getting widespread adoption because, in a world where front-end applications are becoming more complex, they make things like hosting, deployment, authentication, and workflow functions much simpler. As compared to custom code, where you have to spin up a server and set up CI/CD pipelines to deploy to your cluster, low-code takes care of this with the click of a button, saving developers time because they don’t have to spend time configuring and maintaining a custom deployment. Similarly, authentication and authorization is a complex process to get right because there are different access controls to be thought of, invite flows need to be addressed, and onboarding and offboarding for team members need to be taken care of. Low-code products do the heavy lifting and have these features built in, allowing developers to define authentication and authorization in a much simpler manner.


IT leadership: 5 ways you could be wasting your team's talent

Today’s IT specialists no longer need a four-year degree in computer science or engineering to be able to do their job. Between the many boot camps and online training resources, it’s entirely possible for new developers to train themselves and gain the skills necessary to provide value to an IT organization. Limiting your talent search to those with outdated credentials could cause you to miss out on golden opportunities to build out your team. Instead, focus on skills using unbiased assessments that can determine proficiency without knowing a candidate’s educational background or past experience. Maximizing the potential of your IT employees provides a dual benefit: not only does it help your enterprise to avoid the crunch of a talent shortage, but it also ensures that your employees will be more satisfied with their jobs and career advancement. Tech employees highly value logic and efficiency, and by demonstrating that your organization makes the most out of its teams and technologies, you’ll gain another key selling point as you compete for new talent in a highly contested job market.


Cloud security is too important to leave to cloud providers

The need to take control of security and not turn ultimate responsibility over to cloud providers is taking hold among many enterprises, an industry survey suggests. The Cloud Security Alliance, which released its survey of 241 industry experts, identified an "Egregious 11" list of cloud security issues. The survey's authors point out that many of this year's most pressing issues put the onus of security on end-user companies, rather than relying on service providers. "We noticed a drop in ranking of traditional cloud security issues under the responsibility of cloud service providers. Concerns such as denial of service, shared technology vulnerabilities, and CSP data loss and system vulnerabilities -- which all featured in the previous 'Treacherous 12' -- were now rated so low they have been excluded in this report. These omissions suggest that traditional security issues under the responsibility of the CSP seem to be less of a concern. Instead, we're seeing more of a need to address security issues that are situated higher up the technology stack that are the result of senior management decisions."



Quote for the day:

"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Zigler

Daily Tech Digest - March 10, 2022

Sharp rise in SMB cyberattacks by Russia and China

Over the last several weeks, there has been a sharp rise in activity from countries with consistently high levels of both attempted and successful attacks originating within their borders — Russia and China. The vast volume of data analyzed suggests these countries may even be coordinating attack efforts. Per the available analysis, attack trend lines for Russia and China show almost exactly the same pattern, while a chart from Germany shows nothing close to it, leading to educated speculation that these countries could be coordinating efforts. According to the Brookings Institution, “The U.S. National Security Strategy declares Russia and China the two top threats to U.S. national security. At the best of times, U.S.-Russia ties are a mixture of cooperation and competition, but today they are largely adversarial… Russia’s increasingly close relationship with China represents an ongoing challenge for the United States. While there is little that Washington can do to draw Moscow away from Beijing, it should not pursue policies that drive the two countries closer together, such as the trade war with China and rafts of sanctions against Russia.”


Threat intelligence: why it matters, and what best practice looks like

While no two organisations are the same, one useful way to think about deploying threat intelligence is to focus on three stages: monitoring, integration and analysis. In the early days of a threat intelligence strategy, it’s unlikely that you’ll have the relevant expertise, time, or resources necessary to support proactive intelligence analysis just yet. However, by collecting information from various sources and monitoring it for threat indicators relevant to your business, it’s possible to drive significant value. This could include watching for leaked corporate credentials, mentions of your product on the dark web, or typosquats of your corporate brands in domain name registrations, all of which are important as you begin your journey. The intelligence gained from doing so could help inform the IT department about needed password resets, flag phishing email campaigns targeting employees, and accelerate efforts to verify potential security incidents. Next comes integration.
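
As one concrete example of the monitoring stage, the sketch below generates a few naive typosquat candidates for a brand domain and checks whether any of them currently resolve in DNS. The domain is a placeholder, and real monitoring would use far richer permutation rules plus registration (WHOIS) feeds rather than DNS resolution alone.

```python
# Minimal typosquat-monitoring sketch: generate naive permutations of a brand
# domain and flag any that resolve in DNS. 'example.com' is a placeholder.
import socket

def typosquat_candidates(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    candidates = set()
    for i in range(len(name)):
        candidates.add(f"{name[:i]}{name[i + 1:]}.{tld}")          # missing letter
        candidates.add(f"{name[:i]}{name[i]}{name[i:]}.{tld}")     # doubled letter
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(f"{swapped}.{tld}")                         # adjacent swap
    candidates.discard(domain)
    return candidates

def resolves(domain: str) -> bool:
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

brand = "example.com"                       # placeholder corporate domain
registered = [d for d in sorted(typosquat_candidates(brand)) if resolves(d)]
print("Possible typosquats to investigate:", registered or "none found")
```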


A Proposal For Type Syntax in JavaScript

When we’ve been asked "when are types coming to JavaScript?", we’ve had to hesitate to answer. Historically, the problem was that if you asked developers what they had in mind for types in JavaScript, you’d get many different answers. Some felt that types should be totally ignored, while others felt like they should have some meaning – possibly that they should enforce some sort of runtime validation, or that they should be introspectable, or that they should act as hints to the engine for optimization, and more! But in the last few years we’ve seen people converge more towards a design that works well with the direction TypeScript has moved towards – that types are totally ignored and erasable syntax at runtime. This convergence, alongside the broad use of TypeScript, made us feel more confident when several JavaScript and TypeScript developers outside of our core team approached us once more about a proposal called "types as comments". The idea of this proposal is that JavaScript could carve out a set of syntax for types that engines would entirely ignore, but which tools like TypeScript, Flow, and others could use.
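
Python already works roughly the way this proposal describes for JavaScript: type annotations are part of the syntax but are ignored at runtime, and only external tools (mypy, pyright, an IDE) give them meaning. The snippet below illustrates that erasable-annotation behaviour as an analogy to the "types as comments" idea; it is Python, not the proposed JavaScript syntax.

```python
# Analogy in Python: annotations are carried in the syntax but enforced only by
# external tools such as mypy or an IDE, never by the interpreter itself.
def greet(name: str, times: int) -> str:
    return ", ".join([f"hello {name}"] * times)

# The interpreter happily runs this even though the annotations are violated;
# a static type checker would flag both arguments.
print(greet(42, True))   # prints "hello 42" (bool True is treated as 1)
```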


Have smart wearables increased productivity of employees in the hybrid working environment?

Smartwatches offer myriad features that help individuals take charge of their daily tasks and complete them more quickly and easily. From using voice commands to dictate emails or send short messages, to tracking physical movement, water intake, SpO2, heart rate, stress, breathing exercises and stretching, these devices have enabled us to tirelessly complete tasks without compromising on fitness and health. SpO2 has emerged as an important measure of fitness over the last two years, and it is reassuring to keep a check on it from time to time in case any medical assistance is required. Earbuds, on the other hand, let you answer calls hands-free, which makes it easier to take notes or carry on with other tasks, thereby boosting productivity. Features like ANC and ENC take care of background noise to further enhance the quality of the audio experience. And if you’re out running an errand during office hours and forget a scheduled meeting, your smartwatch will notify you; you can then pick up the call via your earbuds while you drive back home. This is really happening out there.


Best Practices for Running Stateful Applications on Kubernetes

A common approach is to run your stateful application in a VM or bare metal machine, and have resources in your Kubernetes cluster communicate with it. The stateful application becomes an external integration from the perspective of pods in your cluster. The upside of this approach is that it allows you to run existing stateful applications as is, with no refactoring or re-architecture. If the application is able to scale up to meet the workloads required by the Kubernetes cluster, you do not need Kubernetes’ fancy auto scaling and provisioning mechanisms. The downside is that by maintaining a non-Kubernetes resource outside your cluster, you need to have a way of monitoring processes, performing configuration management, performing load balancing and service discovery for that application. ... A second, equally common approach is to run stateful applications as a managed cloud service. For example, if you need to run a SQL database with a containerized application, and you are running in AWS, you can use Amazon’s Relational Database Service (RDS). 


3 DevSecOps Practices to Minimize Impact of the Next Log4Shell

Security is tough to get right, and it’s made more difficult by market pressures, cloud complexity and the growing prevalence of open source libraries. This has expanded the typical enterprise’s cyberattack surface to many times its size of several years ago. It has also provided more opportunities for potentially critical vulnerabilities to enter the development cycle and then persist into production. Log4Shell is the poster child for that problem. As a result, it’s more important than ever that we pay more than lip service to the concept of security as a shared responsibility within the organization. “Shared responsibility” is often used to mean greater boardroom buy-in, or in the context of behavioral change among staff, but it’s just as important in IT departments. We need developers to become more skilled in building secure products, but we also need to ensure apps in production continue running securely. Breaking down the silos between developers, operations and security teams will drive true DevSecOps practices. To get there, organizations should unify teams around a centralized platform that gives them visibility and control.


Forrester predicts RPA software market growth will begin to flatten next year

Forrester is predicting that some of the money going to RPA software today will begin to shift to broader AI automation solutions. It’s worth noting that while RPA has “robotic” in its name, it’s not really AI in a true sense. The bots in this case are more like scripts completing a set of highly manual tasks. By comparison, no-code automation solutions make it easy to create a workflow, presumably without consulting help. AI provides a way to intelligently implement tasks and take steps based on the data, instead of moving through a set of highly defined, hard-coded work. This decline is coming in spite of enthusiasm from investors, who valued UiPath at $35 billion when it raised $750 million last year, its last private fundraise prior to its IPO. Today the company’s market cap sits at close to $15 billion, certainly a precipitous drop in value, even taking into consideration the big hit software companies have been taking in the stock market over the last year. Meanwhile, we also saw some pretty significant consolidation as companies like SAP bought Signavio, ServiceNow acquired Intellibot and Salesforce snagged Servicetrace, as several examples.


The rise of confidential blockchains

Cryptoeconomics has long been founded upon the proof-of-work consensus algorithm. This algorithm has proven to be truly resilient to Byzantine attacks. But there are downsides. First, the performance of proof-of-work blockchains remains poor. Bitcoin, for example, still operates at seven transactions per second. Second, proof-of-work blockchains are also extremely energy-intensive. Today, the process of creating Bitcoin consumes around 91 terawatt-hours of electricity annually. This is more energy than is used by Finland, a nation of about 5.5 million people. While one section of commentators considers this a necessary cost of protecting the global cryptocurrency system, rather than just the cost of running a digital payment system, another section thinks that this cost could be done away with by developing proof-of-stake consensus protocols, as they deliver much higher transaction throughput. Indeed, the proof-of-stake blockchains built on the Tendermint framework deliver upwards of 10,000 transactions per second. However, proof-of-stake blockchains also have some downsides.


Teaming is hard because you’re probably not really on a team

Real teams are all about solving the hardest, most complex problems. A diverse set of perspectives and skills is required to untangle these sorts of problems, for which there is no obvious solution. Members of a real team trust each other and work toward a common goal. Real teams are thoughtful, they argue, and they push each other to do better. They require nimble leaders who prioritize building connections within the team. They create clear boundaries that reinforce a strong sense of trust. They have a shared purpose and clear norms. And, importantly, they produce a collective output. If you see a group of people focusing intently on solving a single, very complex problem, you’re probably looking at a real team. Working groups are all about efficiency. Most people spend most of their productive time in working groups. We’ll say it again: there is nothing wrong with being in a working group. In fact, working groups are often best suited to the tasks at hand. Managers of working groups focus heavily on techniques to make their collaboration more efficient. 


How machine learning can course-correct inherent biases in recruiting

Often, if the job opening is attractive, hundreds of people may apply for a single position. Toward the end of the hiring process, the remaining candidates are all more than good enough to do the job, yet most of them don’t make the final cut; hiring managers often decide between them on the basis of minute mistakes. These runners-up are an underutilised resource for HR teams when recruiting. They have already proven themselves, but historically there hasn’t been an easy way to match them with other companies that would likely hire them based on their performance. Joonko has developed a platform made up entirely of silver medalists, pre-qualified candidates who have passed at least two stages of a recruiting process, and matches these candidates with future jobs, saving significant time in the recruiting process. ... “Silver medalists were already vetted by their peers, and the conversation with the candidates could be more around the specific needs of the organisation, without the excruciating part of the interview process.”
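As a rough sketch of this kind of matching (not Joonko’s actual algorithm), the snippet below keeps candidates who cleared at least two interview stages and ranks them by skill overlap with an open role; the records and field names are hypothetical.

```python
# Hypothetical candidate records; not a real data model or vendor algorithm.
candidates = [
    {"name": "A. Rivera", "stages_passed": 3, "skills": {"python", "sql", "airflow"}},
    {"name": "B. Chen",   "stages_passed": 1, "skills": {"java", "kubernetes"}},
    {"name": "C. Okafor", "stages_passed": 4, "skills": {"python", "spark", "sql"}},
]

def match_silver_medalists(role_skills: set[str], pool: list[dict], min_stages: int = 2) -> list[dict]:
    """Keep pre-qualified candidates (at least `min_stages` interview stages passed)
    and rank them by how many of the role's required skills they already have."""
    qualified = [c for c in pool if c["stages_passed"] >= min_stages]
    return sorted(qualified, key=lambda c: len(c["skills"] & role_skills), reverse=True)

if __name__ == "__main__":
    role = {"python", "sql"}
    for c in match_silver_medalists(role, candidates):
        print(c["name"], len(c["skills"] & role), "matching skills")
```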



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - March 09, 2022

Small Biz Takes Digital Highway

Data protection and cybersecurity issues will also take centre stage once small and medium businesses adopt digitisation at a larger scale. He says cyber attacks and complying with laws and policies will require companies to build mechanisms that entail considerable, if not hefty, costs. In fact, small and medium businesses seem to be already bearing the brunt of cyber attacks. A study by Cisco published in September 2021, which sampled 1,014 local businesses, showed that about 74% of small and medium businesses had faced a cyber incident in the past 12 months. “At the end of the day, digital is here to stay, and nobody can ignore that. I am sure the service segment will evolve to offer solutions at affordable prices,” says Subbiah. Costs, after all, are a big factor for smaller companies, though companies are more than willing to spend on adding technology capabilities due to the high return on investment. ... The bulk of the investment goes into cloud, automation and modern infrastructure, say analysts. “Specifically, within cloud, SaaS adoption is seeing acceleration as it entails lower costs and entry barriers,” says Abhinav Johri, director and practice head, digital consulting, EY.


2.5 million-plus cybersecurity jobs are open—women can fill them

Encouraging and nurturing the careers of women in cybersecurity is important for a number of reasons: Our cyber adversaries come from diverse backgrounds, which means that our defender community must be equally diverse in order to understand and succeed against them. We are facing a massive talent shortage in cybersecurity of more than 2.5 million job openings. This is putting a strain on security teams and organizations of every size. We can vastly decrease the deficit by deliberately expanding our hiring and mentorship of underrepresented groups who can bring so much to the table. Innovation is everything! And what’s more conducive to innovation than bringing together new perspectives, ideas, and experiences to solve today’s challenges? Cybersecurity depends on it because cybercrime tactics keep evolving. In fact, an MIT Technology Review article referred to cybersecurity versus cybercrime as “an innovation war.” Studies show that diversity of thought and leadership is just good for business.


IoT comes of age

Cities are near and dear to my heart as a former municipal CIO [chief information officer]. One of the challenges that we’ve seen in a number of large cities around the world is the amount of traffic congestion in the center of cities. A number of different cities have applied congestion pricing. They are tracking when vehicles are in the center of the city and charging for the times when congestion is highest. That doesn’t necessarily make the driver happy, but we have seen material changes in traffic patterns within those cities that have invested in congestion pricing. ... What we saw happen all too often was IoT being treated as a technology project, often run by the CIO or by a small business unit or factory plant all by themselves. And so the technology has changed, but the actual way of work has not. When we look at some of the lighthouse factories that Michael referenced earlier from the World Economic Forum, we see that they treat the integration of IoT as a holistic operating model transformation. When they look at how systems and processes are going to change on the factory floor, for example, they think about how they may need to motivate individuals working within that system differently. 
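A simple way to picture the mechanism: charge each vehicle detected in the zone a fee that rises with measured congestion. The occupancy bands and prices below are invented for illustration and are not any city’s actual scheme.

```python
# Toy congestion-pricing calculator; thresholds and fees are illustrative only.
def congestion_fee(vehicles_in_zone: int, capacity: int, base_fee: float = 2.0) -> float:
    """Scale the per-entry charge with how full the city-centre zone currently is."""
    occupancy = vehicles_in_zone / capacity
    if occupancy < 0.5:
        return 0.0              # free when traffic is light
    if occupancy < 0.8:
        return base_fee         # flat charge at moderate load
    return base_fee * 3         # peak pricing when the zone is congested

if __name__ == "__main__":
    for count in (2_000, 6_000, 9_500):
        print(count, "vehicles ->", congestion_fee(count, capacity=10_000), "per entry")
```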


Critical flaws in remote management agent impact thousands of medical devices

Forescout has identified over 150 potentially vulnerable devices using Axeda from over 100 different manufacturers. Over half of the devices are used in healthcare, specifically lab equipment, surgical equipment, infusion, radiotherapy, imaging and more. Others were found in the financial services, retail, manufacturing and other industries and include ATMs, vending machines, cash management systems, label printers, barcode scanning systems, SCADA systems, asset monitoring and tracking solutions, IoT gateways and machines such as industrial cutters. The seven vulnerabilities, which Forescout has dubbed Access:7, include three critical ones that can result in remote code execution. One vulnerability (CVE-2022-25251) stems from unauthenticated commands present in the Axeda xGate.exe agent that allow an attacker to retrieve information about a device and change the agent's configuration. By changing the configuration, an attacker could point the agent to a server they control and hijack the functionality.


How to approach cloud compliance monitoring

One common strategy is to use the data collected by cloud and network monitoring tools to create a centralized view of compliance status across all these domains. This approach aligns well with current cloud and network monitoring practices. To start a cloud compliance monitoring strategy, divide the tasks identified above. Some are design-time considerations: an application will meet or fall short of compliance standards based on how developers build it. Others are run-time considerations, meaning the application requires surveillance during operations to validate compliance. The specific tools and procedures an organization applies to its cloud applications depend on how compliance requirements map to these categories. Enforce design-time compliance standards in the development pipeline, and validate them through logging and version monitoring. The former requires a systematic way to initiate, execute, review, test and deploy cloud software. Teams must identify tools that enforce and document the requirements of each applicable standard.
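As a minimal sketch of that split, the snippet below gates deploys with design-time rules and scans operational logs for run-time violations; the rule names, resource fields, and log shapes are assumptions, not any specific tool’s schema.

```python
# Minimal sketch of design-time vs. run-time compliance checks (illustrative only).
DESIGN_TIME_RULES = {
    "storage_encrypted": lambda res: res.get("encryption") == "enabled",
    "no_public_buckets": lambda res: not res.get("public_access", False),
}

def check_design_time(resources: list[dict]) -> list[str]:
    """Run in the delivery pipeline: flag resource definitions that violate a rule before deploy."""
    findings = []
    for res in resources:
        for rule_name, check in DESIGN_TIME_RULES.items():
            if not check(res):
                findings.append(f"{res['name']}: fails {rule_name}")
    return findings

def check_run_time(log_events: list[dict]) -> list[str]:
    """Run continuously against operational logs: flag activity that breaks the standard."""
    return [
        f"unauthorized read of {e['resource']} by {e['principal']}"
        for e in log_events
        if e.get("action") == "read" and not e.get("authorized", False)
    ]

if __name__ == "__main__":
    resources = [{"name": "customer-data", "encryption": "disabled", "public_access": True}]
    events = [{"resource": "customer-data", "principal": "svc-report",
               "action": "read", "authorized": False}]
    print(check_design_time(resources))   # caught before the application ships
    print(check_run_time(events))         # caught while the application is running
```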


Ukraine Fighting First-Ever 'Hybrid War' - Cyber Official

Ukraine continues to fight not just on the ground and in the air, but also online. "This is happening for the first time in history and I believe that cyber war can only be ended with the end of conventional war, and we will do everything we can to bring this moment closer," SSSCIP's Zhora said at a Friday press conference, the BBC reported. Zhora said Ukrainian cyber defenders continue to repel attacks on the country's online services and infrastructure, and said that "they are not afraid of Russian" attacks focused on such critical infrastructure as power plants or nuclear facilities, the BBC reported. Internet access remains shaky across Ukraine, due in part to continued bombing, says Britain's Ministry of Defense. "Ukrainian internet access is … highly likely being disrupted as a result of collateral damage from Russian strikes on infrastructure," it says. "Over the past week, internet outages have been reported in Mariupol, Sumy, Kyiv and Kharkiv." ... "Russia is probably targeting Ukraine's communications infrastructure in order to reduce Ukrainian citizens' access to reliable news and information," it adds.


7 reasons to embrace Web3 — and 7 reasons not to

Just because Bitcoin wastes so much energy doesn’t mean that Web3 will need to do the same. There are many protocols that offer some genuine assurance of correctness without requiring a bazillion transistors to be constantly solving some mathematical puzzle. Proof of stake, for example, is a neutral, decentralized protocol. It may not be perfect, but maybe we can get by with an adequate consensus model for a number of parts of Web3? Many people might be just as happy with a blockchain managed by a coalition of trusted parties. It may not be theoretically free of domination, but if the coalition is big enough and the process is open, it could be embraced at a much lower cost in energy, silicon, and time. ... Our society is increasingly driven by data. Anything we can do to increase the accuracy of the data will help everyone who uses the information to make decisions. One side effect of adding more robust digital signatures and protocols to every interaction is that there will be more structure. ... Web3 is bound to have more accurate information, and that will lift every part of the web that depends upon it.
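For contrast with proof-of-work’s puzzle-solving, a toy proof-of-stake selection can look like the sketch below, where the chance of proposing the next block is proportional to the value staked; the validators, stakes, and seed are invented for illustration and stand in for a real protocol’s shared randomness.

```python
import random

def pick_validator(stakes: dict[str, float], seed: int) -> str:
    """Toy proof-of-stake selection: a validator's chance of proposing the next block
    is proportional to its stake, so no energy-hungry hash puzzle is needed."""
    rng = random.Random(seed)            # deterministic stand-in for protocol randomness
    total = sum(stakes.values())
    pick = rng.uniform(0, total)
    running = 0.0
    for validator, stake in stakes.items():
        running += stake
        if pick <= running:
            return validator
    return validator                     # fallback for floating-point edge cases

if __name__ == "__main__":
    stakes = {"alice": 50.0, "bob": 30.0, "carol": 20.0}
    print(pick_validator(stakes, seed=42))
```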


The Uncertain Future of IT Automation

As Automox predicted at the end of last year, IT and security transformation continue as organizations everywhere try to find a new normal following the disruptions of the pandemic, and IT automation will have to adjust. This has been challenging for many organizations — and more importantly, people, as discussed above — but there are silver linings too. The pandemic has pushed new innovation across many areas, with exciting new tools and practices on the horizon for IT and security teams. One innovation that is particularly interesting is cybersecurity mesh architectures. Gartner has claimed that “organizations adopting a cybersecurity mesh architecture will reduce the financial impact of security incidents by an average of 90 percent” by 2024. A cybersecurity mesh architecture leverages various parts of the enterprise to integrate widely distributed, disparate security services. This is key to managing and accounting for a workforce that has never been more remote and globally distributed.


Predicting the future of AI and analytics in endpoint security

What’s troubling about Unit 42’s findings for endpoints is that 40% of enterprises are still using spreadsheets to track digital certificates manually, and 57% of enterprises don’t have an accurate inventory of SSH keys. These two factors contribute to the widening gap in endpoint security that bad actors are highly skilled at exploiting. It’s common to find organizations that aren’t tracking up to 40% of their endpoints, according to a recent interview with Jim Wachhaus, attack surface protection evangelist at CyCognito. Wachhaus told VentureBeat that it’s common to find organizations generating thousands of unknown endpoints a year. Supporting his findings are CISOs who tell VentureBeat that keeping track of every endpoint is beyond what manual processes can accomplish today, as their IT staffs are already stretched thin. Add to that the chronic labor shortage CIOs and CISOs are battling, with their best employees being offered pay increases of 40% or more on base salary and signing bonuses of up to $10,000 to jump to a new company, and the severity of the situation becomes clear. In addition, 56% of executives say their cybersecurity analysts are overwhelmed, according to BCG.
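One way to move certificate tracking out of spreadsheets is to query each host’s TLS certificate directly and compute its remaining lifetime. The sketch below uses only Python’s standard library; the host list is a stand-in for a real asset inventory.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443, timeout: float = 5.0) -> int:
    """Fetch a host's TLS certificate and return the number of days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter is formatted like "Jun  1 12:00:00 2025 GMT"
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # Hypothetical inventory; in practice this list would come from an asset database.
    for host in ["example.com"]:
        print(host, cert_days_remaining(host), "days until expiry")
```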


New attack bypasses hardware defenses for Spectre flaw in Intel and ARM CPUs

To mitigate the risk, software vendors such as Google and the Linux kernel developers came up with software-based solutions such as retpoline. While these were effective, they introduced a significant performance hit, so CPU vendors later developed hardware-based defenses. Intel’s is called eIBRS and ARM’s is called CSV2. The solutions are complex, but "the gist of them is that the predictor 'somehow' keeps track of the privilege level (user/kernel) in which a target is executed," the VUSec researchers explain. "And, as you may expect, if the target belongs to a lower privilege level, kernel execution won’t use it." The problem, however, is that the CPU’s predictor relies on a global history to select the target entries to speculatively execute and, as the VUSec researchers proved, this global history can be poisoned. In other words, while the original Spectre v2 allowed attackers to actually inject target code locations and then trick the kernel into executing that code, the new Spectre-BHI/BHB attack can only force the kernel to mispredict and execute interesting code gadgets or snippets that already exist in the branch history and were executed in the past, but which might leak data.



Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox