Daily Tech Digest - March 29, 2022

How Platform Ops Teams Should Think About API Strategy

Rules and policies that control how APIs can connect with third parties and internally are a critical foundation of modern apps. At a high level, connectivity policies dictate the terms of engagement between APIs and their consumers. At a more granular level, Platform Ops teams need to ensure that APIs can meet service-level agreements and respond to requests quickly across a distributed environment. At the same time, connectivity overlaps with security: API connectivity rules are essential to ensure that data isn’t lost or leaked, business logic is not abused and brute-force account takeover attacks cannot target APIs. This is the domain of the API gateway. Unfortunately, most API gateways are designed primarily for north-south traffic. East-west traffic policies and rules are equally critical because in modern cloud native applications, there’s actually far more east-west traffic among internal APIs and microservices than north-south traffic to and from external customers.
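
To make the idea of a connectivity policy concrete, here is a minimal sketch, in Python, of the kind of per-client rate limit a gateway might enforce on both external (north-south) and internal (east-west) calls. The class, the limits, and the caller types are illustrative assumptions, not any particular gateway's API.

    import time

    class TokenBucket:
        """Simple per-client rate limiter of the kind a gateway policy might describe."""
        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec       # tokens added per second
            self.capacity = burst          # maximum burst size
            self.tokens = float(burst)
            self.updated = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Hypothetical policy: internal (east-west) callers get a larger budget than external ones,
    # but the same enforcement mechanism applies to both kinds of traffic.
    policies = {"external": TokenBucket(rate_per_sec=5, burst=10),
                "internal": TokenBucket(rate_per_sec=50, burst=100)}

    def admit(caller_type: str) -> bool:
        return policies[caller_type].allow()

The point of the sketch is that enforcement has to exist for service-to-service calls as well, not only at the public edge.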


What will it take to stop fraud in the metaverse?

While some fraud in the metaverse can be expected to resemble the scams and tricks of our ‘real-world’ society, other types of fraud must be quickly understood if they are to be mitigated by metaverse makers. When Facebook’s Metaverse first launched, investors rushed to pour billions of dollars into buying acres of land. The so-called ‘virtual real estate’ sparked a land boom which saw $501 million in sales in 2021. This year, that figure is expected to grow to $1 billion. Selling land in the metaverse works like this: pieces of code are partitioned to create individual ‘plots’ within certain metaverse platforms. These are then made available to purchase as NFTs on the blockchain. While we might have laughed when one buyer paid hundreds of thousands of dollars to be Snoop Dogg’s neighbour in the metaverse, this is no laughing matter when it comes to security. Money spent in the metaverse is real, and fraudsters are out to steal it. One of the dangers of the metaverse is that, while the virtual land and property aren’t real, their monetary value is. On purchase, they become real assets linked to your account. Therefore, fraud doesn’t look like it used to.


How IoT data is changing legacy industries – and the world around us

Massive, unstructured IoT data workloads — typically stored at the edge or on-premise — require infrastructure that not only handles big data inflows, but directs traffic to ensure that data gets where it needs to be without disruption or downtime. This is no easy feat when it comes to data sets in the petabyte and exabyte range, but this is the essential challenge: prioritizing the real-time activation of data at scale. By building a foundation that optimizes the capture, migration, and usage of IoT data, these companies can unlock new business models and revenue streams that fundamentally alter their effects on the world around us. ... As legacy companies start to embrace their IoT data, cloud service providers should take notice. Cloud adoption, long understood to be a priority among businesses looking to better understand their consumers, will become increasingly central to the transformation of traditional companies. The cloud and the services delivered around it will serve as a highway for manufacturers or utilities to move, activate, and monetize exabytes of data that are critical to businesses across industries. 


The security gaps that can be exposed by cybersecurity asset management

There is a plethora of tools being used to secure assets, including desktops, laptops, servers, virtual machines, smartphones, and cloud instances. But despite this, companies can struggle to identify which of their assets are missing the relevant endpoint protection platform/endpoint detection and response (EPP/EDR) agent defined by their security policy. They may have the correct agent but fail to understand why its functionality has been disabled, or they are using out-of-date versions of the agent. The importance of understanding which assets are missing the proper security tool coverage and which are missing the tools’ functionality cannot be overstated. If a company invests in security and then suffers a malware attack because it has failed to deploy the endpoint agent, it is a waste of valuable resources. Agent health and cyber hygiene depend on knowing which assets are not protected, and this can be challenging. The admin console of an EPP/EDR can provide information about which assets have had the agent installed, but it does not necessarily prove that the agent is performing as it should.
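
A minimal sketch of the cross-referencing this implies: compare an asset inventory against what the EPP/EDR console reports and flag machines with no agent, a disabled agent, or an out-of-date version. The host names, fields, and version threshold below are hypothetical.

    # Hypothetical asset inventory and EDR console export.
    inventory = {"host-01", "host-02", "host-03", "host-04"}
    edr_report = {
        "host-01": {"agent_version": "7.2", "enabled": True},
        "host-02": {"agent_version": "6.9", "enabled": True},   # out of date
        "host-03": {"agent_version": "7.2", "enabled": False},  # functionality disabled
    }
    MINIMUM_VERSION = "7.0"

    missing = inventory - edr_report.keys()
    disabled = {h for h, a in edr_report.items() if not a["enabled"]}
    outdated = {h for h, a in edr_report.items()
                if a["agent_version"] < MINIMUM_VERSION}   # naive string compare, fine for a sketch

    print("No agent installed:", sorted(missing))
    print("Agent disabled:   ", sorted(disabled))
    print("Agent out of date:", sorted(outdated))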


Google AI and UC Berkeley Researchers Introduce A Deep Learning Approach Called ‘PRIME’

To overcome this restriction, PRIME develops a robust prediction model that isn’t easily tricked by adversarial cases. To architect accelerators, this model is then simply optimized using any standard optimizer. More crucially, unlike previous methods, PRIME can learn what not to construct by utilizing existing datasets of infeasible accelerators. This is accomplished by supplementing the learned model’s supervised training with extra loss terms that specifically penalize the learned model’s value on infeasible accelerator designs and adversarial cases during training, a method similar to adversarial training. One of the main advantages of a data-driven approach is that it enables learning highly expressive and generalist optimization objective models that generalize across target applications. Furthermore, these models have the potential to be effective for new applications for which a designer has never attempted to optimize accelerators. To train PRIME to generalize to unseen applications, the trained model is conditioned on a context vector that identifies the particular neural net application one desires to accelerate.
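
A rough sketch of the general idea of adding a loss term that penalizes the model's value on infeasible or adversarial designs; this is an illustration of the technique described above, under the assumption that a higher predicted value means a better design, and it is not the authors' actual objective function.

    import numpy as np

    def prime_style_loss(pred_feasible, target, pred_infeasible, penalty_weight=1.0):
        """Supervised loss plus a term that pushes predicted values down on infeasible designs.

        pred_feasible:   model outputs on feasible accelerator designs
        target:          measured objective values for those designs
        pred_infeasible: model outputs on designs known to be infeasible (or adversarially found)
        """
        supervised = np.mean((pred_feasible - target) ** 2)
        # Penalize optimistic (high) predictions for designs that cannot actually be built,
        # so a downstream optimizer is not lured toward them.
        penalty = np.mean(np.maximum(pred_infeasible, 0.0))
        return supervised + penalty_weight * penalty

    # Toy usage with invented numbers.
    loss = prime_style_loss(np.array([0.8, 0.6]), np.array([0.7, 0.65]), np.array([0.9, 1.2]))
    print(loss)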


Use zero trust to fight network technical debt

In a ZT environment, the network not only doesn’t trust a node new to it, but it also doesn’t trust nodes that are already communicating across it. When a node is first seen by a ZT network, the network will require that the node go through some form of authentication and authorization check. Does it have a valid certificate to prove its identity? Is it allowed to be connected where it is based on that identity? Is it running valid software versions, defensive tools, etc.? It must clear that hurdle before being allowed to communicate across the network. In addition, the ZT network does not assume that a trust relationship is permanent or context free: Once it is on the network, a node must be authenticated and authorized for every network operation it attempts. After all, it may have been compromised between one operation and the next, or it may have begun acting aberrantly and had its authorizations stripped in the preceding moments, or the user on that machine may have been fired.
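
A minimal sketch of what "authenticated and authorized for every operation" can look like in code. The field names, checks, and policy structure are illustrative assumptions, not a specific product's API.

    from datetime import datetime, timezone

    def authorize(node: dict, operation: str, policy: dict) -> bool:
        """Re-evaluate trust on every request instead of once at connection time."""
        # 1. Identity: the node must present a certificate that is still valid.
        if node["cert_expiry"] <= datetime.now(timezone.utc):
            return False
        # 2. Posture: software versions and defensive tooling must still meet policy.
        if node["agent_healthy"] is not True:
            return False
        # 3. Authorization: this identity must be allowed to perform this operation right now.
        return operation in policy.get(node["identity"], set())

    policy = {"billing-service": {"read:invoices", "write:invoices"}}
    node = {"identity": "billing-service",
            "cert_expiry": datetime(2030, 1, 1, tzinfo=timezone.utc),
            "agent_healthy": True}
    assert authorize(node, "read:invoices", policy)
    assert not authorize(node, "drop:tables", policy)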


IT professionals wary of government campaign to limit end-to-end encryption

Many industry experts said they were worried about the possibility of increased surveillance from governments, police and the technology companies that run the online platforms. Other concerns were around the protection of financial data from hackers if end-to-end encryption was undermined. There were concerns that wider sharing of “secret keys”, or centralised management of encryption processes, would significantly increase the risk of compromising the confidentiality they are meant to preserve. BCS’s Mitchell said: “It’s odd that so much focus has been on a magical backdoor when other investigative tools aren’t being talked about. Alternatives should be looked at before limiting the basic security that underpins everyone’s privacy and global free speech.” Government and intelligence officials are advocating, among other ways of monitoring encrypted material, technology known as client-side scanning (CSS) that is capable of analysing text messages on phone handsets and computers before they are sent by the user.


Hypernet Labs Scales Identity Verification and NFT Minting

A majority of popular NFT projects so far have been focused on profile pictures and art projects, where early adopters have shown a willingness to jump through hoops and bear the burden of high transaction fees on the Ethereum Network. There’s growing enthusiasm for NFTs that serve more utilitarian purposes, like unlocking bonus content for subscription services or as a unique token to allow access to experiences and events. With the release of Hypernet.Mint, Hypernet Labs is taking the same approach toward simplifying the user experience that it applied to Hypernet.ID. Hypernet.Mint offers lower-cost deployment by leveraging Layer 2 blockchains like Polygon and Avalanche that don’t have the same high fee structure as the Ethereum mainnet. The company also helps dApps create a minting strategy that aligns with business goals, supporting either mass minting or minting that is based on user onboarding flows that may acquire additional users over time. “We’re working on a lot of onboarding flow for new types of users, which comes back to ease of use for users,” Ravlich said.


How decision intelligence is helping organisations drive value from collected data

While AI can be a somewhat nebulous concept, decision intelligence is more concrete. That’s because DI is outcome-focused: a decision intelligence solution must deliver a tangible return on investment before it can be classified as DI. A model for better stock management that gathers dust on a data scientist’s computer isn’t DI. A fully productionised model that enables a warehouse team to navigate the pick face efficiently and decisively, saving time and capital expense — that’s decision intelligence. Since DI is outcome focused, it requires models to be built with an objective in mind and so addresses many of the pain points for businesses that are currently struggling to quantify value from their AI strategy. By working backwards from an objective, businesses can build needed solutions and unlock value from AI quicker. ... Global companies, including Pepsico, KFC and ASOS have already emerged as early adopters of DI, using it to increase profitability and sustainability, reduce capital requirements, and optimise business operations.


Insights into the Emerging Prevalence of Software Vulnerabilities

Software quality is not always an indicator of secure software. A measure of secure software is the number of vulnerabilities uncovered during testing and after production deployment. Software vulnerabilities are a sub-category of software bugs that threat actors often exploit to gain unauthorized access or perform unauthorized actions on a computer system. Authorized users also exploit software vulnerabilities, sometimes with malicious intent, targeting one or more vulnerabilities known to exist on an unpatched system. These users can also unintentionally exploit software vulnerabilities by inputting data that is not validated correctly, subsequently compromising its integrity and the reliability of those functions that use the data. Vulnerability exploits target one or more of the three security pillars: Confidentiality, Integrity, and Availability, commonly referred to as the CIA Triad. Confidentiality entails protecting data from unauthorized disclosure; Integrity entails protecting data from unauthorized modification and facilitates data authenticity; Availability entails ensuring that data and systems remain accessible to authorized users when needed.
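
The point about improperly validated input is easy to illustrate: a few lines of defensive validation at a trust boundary protect the integrity of everything downstream that consumes the data. The field name and allowed range below are invented for the example.

    def validate_quantity(raw: str) -> int:
        """Reject input that would silently corrupt downstream calculations."""
        try:
            value = int(raw)
        except ValueError:
            raise ValueError(f"quantity must be an integer, got {raw!r}")
        if not 1 <= value <= 10_000:
            raise ValueError(f"quantity {value} is outside the allowed range 1-10000")
        return value

    validate_quantity("25")                        # accepted
    # validate_quantity("-3")                      # rejected: out of range
    # validate_quantity("25; DROP TABLE orders")   # rejected: not an integer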



Quote for the day:

"To be a good leader, you don't have to know what you're doing; you just have to act like you know what you're doing." -- Jordan Carl Curtis

Daily Tech Digest - March 28, 2022

Scientists Work To Turn Noise on Quantum Computers to Their Advantage

“We know very little about quantum computers and noise, but we know really well how this molecule behaves when excited,” said Hu. “So we use quantum computers, which we don’t know much about, to mimic a molecule which we are familiar with, and we see how it behaves. With those familiar patterns we can draw some understanding.” This operation gives a more ‘bird’s-eye’ view of the noise that quantum computers simulate, said Scott Smart, a Ph.D. student at the University of Chicago and first author on the paper. The authors hope this information can help researchers as they think about how to design new ways to correct for noise. It could even suggest ways that noise could be useful, Mazziotti said. For example, if you’re trying to simulate a quantum system such as a molecule in the real world, you know it will be experiencing noise—because noise exists in the real world. Under the previous approach, you use computational power to add a simulation of that noise. “But instead of building noise in as additional operation on a quantum computer, maybe we could actually use the noise intrinsic to a quantum computer to mimic the noise in a quantum problem that is difficult to solve on a conventional computer,” Mazziotti said.


How to Bring Shadow Kubernetes IT into the Light

Running container-based applications in production goes well beyond Kubernetes. For example, IT operations teams often require additional services for tracing, logs, storage, security and networking. They may also require different management tools for Kubernetes distribution and compute instances across public clouds, on-premises, hybrid architectures or at the edge. Integrating these tools and services for a specific Kubernetes cluster requires that each tool or service is configured according to that cluster’s use case. The requirements and budgets for each cluster are likely to vary significantly, meaning that updating or creating a new cluster configuration will differ based on the cluster and the environment. As Kubernetes adoption matures and expands, there will be a direct conflict between admins, who want to lessen the growing complexity of cluster management, and application teams, who seek to tailor Kubernetes infrastructure to meet their specific needs. What magnifies these challenges even further is the pressure of meeting internal project deadlines — and the perceived need to use more cloud-based services to get the work done on time and within budget.


Managing the complexity of cloud strategies

Both polycloud and sky computing are strategies for managing the complexities of a multicloud deployment. Which model is better? Polycloud is best at leveraging the strengths of each individual cloud provider. Because each cloud provider is chosen based on its strength in a particular cloud specialty, you get the best of each provider in your applications. This also encourages a deeper integration with the cloud tools and capabilities that each provider offers. Deeper integration means better cloud utilization, and more efficient applications. Polycloud comes at a cost, however. The organization as a whole, and each development and operations person within the organization, need deeper knowledge about each cloud provider that is in use. Because an application uses specialized services from multiple providers, the application developers need to understand the tools and capabilities of all of the cloud providers. Sky computing relieves this knowledge burden on application developers. Most developers in the organization need to know and understand only the sky API and the associated tooling and processes.


US, EU Agree to a New Data-Sharing Framework

The Biden administration and the European Commission said in a joint statement issued on Friday that the new framework "marks an unprecedented commitment on the U.S. side to implement reforms that will strengthen the privacy and civil liberties protections applicable to U.S. signals intelligence activities." Signals intelligence involves the interception of electronic signals/systems used by foreign targets. In the new framework, the U.S. reportedly will apply new "safeguards" to ensure signals surveillance activities "are necessary and proportionate in the pursuit of defined national security objectives," the statement says. It also will establish a two-level "independent redress mechanism" with binding authority, which it said will "direct remedial measures, and enhance rigorous and layered oversight of signals intelligence activities." The efforts, the statement says, place limitations on surveillance. Officials said the framework reflects more than a year of negotiations between U.S. Secretary of Commerce Gina Raimondo and EU Commissioner for Justice Didier Reynders.


Google's tightening key security on Android with a longer (but better) chain of trust

There's a software key stored on basically every Android phone, inside a secure element and separated from your own data — separately from Android itself, even. The bits required for that key are provided by the device manufacturer when the phone is made, signed by a root key that's provided by Google. In more practical terms, apps that need to do something sensitive can prove that the bundled secure hardware environment can be trusted, and this is the basis on which a larger chain of trust can be built, allowing things like biometric data, user data, and secure operations of all kinds to be stored or transmitted safely. Previously, Android devices that wanted to enjoy this process needed to have that key securely installed at the factory, but Google is changing from in-factory private key provisioning to in-factory public key extraction with over-the-air certificate provisioning, paired with short-lived certificates. As even the description makes it sound, this new change is a more complicated system, but it fixes a lot of issues in practice.


How Do I Demonstrate the ROI of My Security Program?

The first is to change the perception of security’s role as the “office of NO.” Security programs need to embrace that their role is to ENABLE the business to take RISKS, and not to eliminate risks. For example, if a company needs to set up operations in a high-risk country, with risky cyber laws or operators, the knee jerk reaction of most security teams is to say “no.” In reality, the job of the security team is to enable the company to take that risk by building sound security programs that can identify, detect, and respond to cybersecurity threats. When company leaders see security teams trying to help them achieve their business goals, they are better able to see the value of a strong cybersecurity program. Similarly, cybersecurity teams must understand their company’s business goals and align security initiatives accordingly. Too many security teams try to push their security initiatives as priorities for the business, when, in fact, those initiatives may be business negatives.


Extended Threat Intelligence: A new approach to old school threat intelligence

One of the challenges of being a security leader is making the most informed decision to choose from a diverse pool of technologies to prevent data breaches. As the trend of consolidation in cybersecurity is accelerating, solutions that provide similar results but are listed under different market definitions make the job harder. Meanwhile, security practitioners grapple with a multitude of technologies that generate alerts from various vendors, eventually causing loss of productivity and added complexity. The importance of integrating artificial intelligence into the cybersecurity sector should be underlined at this point. A smart combination of AI-powered automation technology and a CTIA team can increase productivity by turning a massive stream of raw events into a manageable set of alerts. ... This is where Digital Risk Protection (DRPS) and Cyber Threat Intelligence (CTI) take the stage. To give an example, starting from auto-discovered digital assets, including brand keywords, unified DRPS and CTI technology collects data across the surface, deep, and dark web and processes and analyzes it in real time.


Large-Scale, Available Graphene Supercapacitors; How Close are We?

One issue with supercapacitors so far has been their low energy density. Batteries, on the other hand, have been widely used in consumer electronics. However, after a few charge/discharge cycles, they wear out and have safety issues, such as overheating and explosions. Hence, scientists started working on coupling supercapacitors and batteries as hybrid energy storage systems. For example, Prof. Roland Fischer and a team of researchers from the Technical University Munich have recently developed a highly efficient graphene hybrid supercapacitor. It consists of graphene as the electrostatic electrode and metal-organic framework (MOF) as the electrochemical electrode. The device can deliver a power density of up to 16 kW/kg and an energy density of up to 73 Wh/kg, comparable to several commercial devices such as Pb-acid batteries and nickel metal hydride batteries. Moreover, the standard batteries (such as lithium) have a useful life of around 5000 cycles. However, this new hybrid graphene supercapacitor retains 88% of its capacity even after 10,000 cycles.
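
A quick back-of-the-envelope check on the figures quoted above shows why the combination matters; the cell mass is illustrative, the densities are the reported values.

    energy_density_wh_per_kg = 73      # reported energy density
    power_density_kw_per_kg = 16       # reported power density
    mass_kg = 1.0                      # illustrative cell mass

    energy_wh = energy_density_wh_per_kg * mass_kg
    power_w = power_density_kw_per_kg * 1000 * mass_kg
    full_discharge_seconds = energy_wh / power_w * 3600

    print(f"Stored energy: {energy_wh:.0f} Wh")
    print(f"Peak power:    {power_w / 1000:.0f} kW")
    print(f"Time to empty at peak power: {full_discharge_seconds:.1f} s")
    # Roughly 73 Wh delivered at 16 kW empties in about 16 seconds:
    # battery-like energy released at supercapacitor-like rates.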


3 reasons user experience matters to your digital transformation strategy

Simply put, a strong UX makes it easier for people to follow the rules. You can “best practice” employees all day long, but if those practices get in the way of day-to-day responsibilities, what’s the point of having them? Security should be baked into all systems from the get-go, not treated as an afterthought. And when it’s working well, people shouldn’t even know it’s there. Don’t make signing into different systems so complicated or time-consuming that people resort to keeping a list of passwords next to their computer. Automating security measures as much as possible is the surest way to stay protected while putting UX at the forefront. By doing this, people will have access to the systems they need and be prohibited from those that they don’t for the duration of their employment – not a minute longer or shorter. Automation also enables organizations to understand what is normal vs. anomalous behavior so they can spot problems before they get worse. For business leaders who really want to move the needle, UX should be just as important as CX. Employees may not be as vocal as customers about what needs improvement, but it’s critical information.


Automation Is No Silver Bullet: 3 Keys for Scaling Success

Many organizations think automation is an easy way to enter the market. Although it’s a starting point, automated testing warrants prioritization. Automated testing doesn’t just speed up QA processes, but also speeds up internal processes. Maintenance is also an area that benefits from automation with intelligent suggestions and searches. Ongoing feedback is needed to keep pace with user expectations. It’s a must-have for agile continuous integration and continuous delivery cycles. Plus, adopting automated testing ensures more confidence in releases and lower risks of failures. That means less stress and happier times for developers. That is increasingly important given the current shortage of developers amid the great reshuffle. Automated testing can help fight burnout and sustain a team of developers who make beautiful and high-quality applications. Some of the benefits of test automation include fewer bugs and better security in final products, which increases the value of the software delivered.



Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold

Daily Tech Digest - March 27, 2022

Chasing The Myth: Why Achieving Artificial General Intelligence May Be A Pipe Dream

Often, people confuse AGI with AI, which is loosely used nowadays by marketers and businesses to describe run-of-the-mill machine learning applications and even normal automation tools. In simple words, Artificial General Intelligence involves an ever-growing umbrella of abilities of machines to perform various tasks significantly better than even the brightest of human minds. An example of this could be AI accurately predicting stock market trends to allow investors to rake in profits consistently. Additionally, AGI-based tools can interact with humans conversationally and casually. In recent times, domotic applications such as smart speakers, smart kitchens and smartphones are gradually becoming more interactive as they can be controlled with voice commands. Additionally, advanced, updated versions of such applications show distinctly human traits such as humor, empathy and friendliness. However, such applications just stop short of having genuinely authentic interactions with humans. The prospective future arrival of AGI, if it happens, will plug this gap.


JavaScript Framework Unpoly and the HTML Over-the-Wire Trend

JavaScript is the most popular programming language in the world, and React is one of its leading libraries. Initially released in 2013, React was designed to be a library for helping developers craft user interfaces (UIs). According to Henning Koch, React and Unpoly aren’t entirely opposites. They share some likenesses, but there are a few important distinctions. “What both frameworks share is that they render a full page when the user navigates, but then only fragments of that new page are inserted into the DOM, with the rest being discarded,” he explained. “However, while a React app would usually call a JSON API over the network and render HTML in the browser, Unpoly renders HTML on the server, where we have synchronous access to our data and free choice of programming language.” Still, Koch acknowledges there are some instances where React and SPAs are suitable choices. He went on to say, “There are still some cases where a SPA approach shines. For instance, we recently built a live chat where messages needed to be end-to-end encrypted.


Researchers make a quantum storage breakthrough by storing a qubit for 20 milliseconds

The new 20-millisecond milestone, however, could be just the breakthrough Afzelius' team was looking for. "This is a world record for a quantum memory based on a solid-state system, in this case a crystal. We have even managed to reach the 100 millisecond mark with a small loss of fidelity," Afzelius said. For their experiments, the researchers kept their crystals at temperatures of -273.15°C so as not to disturb the effect of entanglement. "We applied a small magnetic field of one thousandth of a Tesla to the crystal and used dynamic decoupling methods, which consist in sending intense radio frequencies to the crystal," said Antonio Ortu, a post-doctoral fellow in the Department of Applied Physics at UNIGE. "The effect of these techniques is to decouple the rare-earth ions from perturbations of the environment and increase the storage performance we have known until now by almost a factor of 40," he added. The result of this experiment could allow for the development of long-distance quantum telecommunications networks, though the researchers would still have to extend the storage time further.


3 Tips to Take Advantage of the Future Web 3.0 Decentralized Infrastructure

There's been a lot of talk of innovation and helping the little guy through blockchain. But huge resources and backing are needed in order to sustain a project and take it mainstream on a longer time horizon. Even with a brilliant technical team, excellent developers and a well-thought-out whitepaper and tokenomics ecosystem, the project won’t go anywhere unless it's marketed on major outlets and pushed towards consumers consistently. This is an attention-based economy and it takes effort to capture mainstream attention and to keep it. Moreover, it will take a great deal of finance to develop VR technology that is high-quality, integrated into the many metaverses, cost-effective and marketed well. A small team might be able to conjure up a good initial project. But they will likely need to partner up or hand off the project so it can become mainstream. Always assess who a project is affiliated with and what partnerships they have. This is a strong indication of how much they value their own project and also offers numerous other benefits for various scenarios.


A diffractive neural network that can be flexibly programmed

In initial evaluations, the diffractive neural network introduced by this team of researchers achieved very promising results, as it was found to be highly flexible and applicable across a wide range of scenarios. In the future, it could thus be used to solve a variety of real-world problems, including image classification, wave sensing and wireless communication coding/decoding. Meanwhile, Cui and his colleagues will work on improving its performance further. "The prototype implemented in this work is based on a 5-layer diffractive neural network, each layer has 64 programmable neurons, and the total number of nodes in the network is relatively low," Cui added. "At the same time, the operating frequency band of this network is lower, resulting in a larger size of the physical network. In our next studies, we plan to further increase the scale of the programmable neurons of the network, improve the network integration, reduce the size and form a set of intelligent computers with stronger computing power and more practicality for sensing and communications."


Microsoft Azure Developers Awash in PII-Stealing npm Packages

In this case, the cyberattackers were pretending to offer a key set of existing, legitimate packages for Azure. “It became apparent that this was a targeted attack against the entire @azure npm scope, by an attacker that employed an automatic script to create accounts and upload malicious packages that cover the entirety of that scope,” researchers said in a Wednesday posting. “The attacker simply creates a new (malicious) package with the same name as an existing @azure scope package, but drops the scope name.” Npm scopes are a way of grouping related packages together. JFrog found that besides the @azure scope, other popular package groups were also targeted, including @azure-rest, @azure-tests, @azure-tools and @cadl-lang. The researchers added, “The attacker is relying on the fact that some developers may erroneously omit the @azure prefix when installing a package. For example, running npm install core-tracing by mistake, instead of the correct command – npm install @azure/core-tracing.” The attacker also tried to hide the fact that all of the malicious packages were uploaded by the same author, “by creating a unique user per each malicious package uploaded,” according to JFrog.
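
The attack pattern described, dropping the scope from a legitimate package name, is simple enough to screen for in a project. Below is a hedged sketch that scans a package.json for dependencies whose names collide with known scoped packages; the package list is a stand-in for illustration, not an authoritative registry feed.

    import json

    # Stand-in list of legitimate scoped packages; in practice, pull the real @azure scope listing.
    KNOWN_SCOPED = {"@azure/core-tracing", "@azure/core-auth", "@azure/identity"}
    UNSCOPED_NAMES = {name.split("/", 1)[1]: name for name in KNOWN_SCOPED}

    with open("package.json") as f:
        deps = json.load(f).get("dependencies", {})

    for dep in deps:
        if dep in UNSCOPED_NAMES:
            print(f"Suspicious dependency '{dep}': did you mean '{UNSCOPED_NAMES[dep]}'?")

Run against a project that mistakenly depends on core-tracing instead of @azure/core-tracing, the sketch would flag the unscoped name for review.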


An Introduction to Mathematical Thinking for Data Science

Mathematical thinking is closely tied to what many mathematicians call mathematical maturity. In the words of UC Berkeley professor Anant Sahai, mathematical maturity refers to “comfort in solving problems step by step and maintaining confidence in your work even as you take steps forward.” In most mathematical problems, the solution is not immediately clear. A mathematically mature person finds it reasonable — and even satisfying — to make incremental progress and eventually reach a solution, even if they have no idea what it might be when they first begin. ... the ability to think mathematically will give you deeper insight into the intricacies of the data and how the problem is actually being solved. You might find yourself logically rearranging a program to make it more efficient, or recommending a specific data collection technique to obtain a sample which matches existing statistical methods. In doing so, you’ll expand your repertoire and thus contribute to a better data science workflow.


Schwinger effect seen in graphene

In theory, a vacuum is devoid of matter. In the presence of strong electric or magnetic fields, however, this void can break down, causing elementary particles to spring into existence. Usually, this breakdown only occurs during intense astrophysical events, but researchers at the UK’s National Graphene Institute at the University of Manchester have now brought it into tabletop territory for the first time, observing this so-called Schwinger effect in a device based on graphene superlattices. The work will be important for developing electronic devices based on graphene and other two-dimensional quantum materials. In graphene, which is a two-dimensional sheet of carbon atoms, a vacuum exists at the point (in momentum space) where the material’s conduction and valence electron bands meet and no intrinsic charge carriers are present. Working with colleagues in Spain, the US, Japan and elsewhere in the UK, the Manchester team led by Andre Geim identified a signature of the Schwinger effect at this Dirac point, observing pairs of electrons and holes created out of the vacuum.


The risk of undermanaged open source software

Some risks are the same regardless of whether solutions are built with vendor-curated or upstream software; however it is the responsibility for maintenance and security of the code that changes. Let’s make some assumptions about a typical organization. That organization is able to identify where all of its open source comes from, and 85% of that is from a major vendor it works with regularly. The other 15% consists of offerings not available from the vendor of choice and comes directly from upstream projects. For the 85% that comes from a vendor, any security concerns, security metadata, announcements and, most importantly, security patches, come from that vendor. In this scenario, the organization has one place to get all of the needed security information and updates. The organization doesn’t have to monitor the upstream code for any newly discovered vulnerabilities and, essentially, only needs to monitor the vendor and apply any patches it provides.


Confessions of a Low-Code Convert

A lot of programmers hear “low-code tools” and get twitchy. But the reality, especially if you are building processes versus tools, is that low-code solutions don’t prevent me from being creative or effective; they enable it. They handle tedious, labor-intensive boilerplate items and free me up to write the lines of JavaScript I actually need to uniquely express a business problem. And there are still plenty of places where you need to (and get to!) write that clever bit of code to implement a unique business requirement. It’s much easier to fix or refactor an app written by a low-code citizen developer in the line of business than it is to decipher whatever madness they’ve slapped together in their massive, mission-critical Excel spreadsheet. I find low-code platforms incredibly sanity-saving. They reduce noise in the system and obviate a lot of the admittedly unexciting elements of my work. The technology landscape has changed dramatically. Cloud adoption has introduced a world of serverless containerization.



Quote for the day:

"Leadership - leadership is about taking responsibility, not making excuses." -- Mitt Romney

Daily Tech Digest - March 22, 2022

When did Data Science Become Synonymous with Machine Learning?

Many folks just getting started with data science have an illusory idea of the field as a breeding ground where state-of-the-art machine learning algorithms are produced day after day, hour after hour, second after second. While it is true that getting to push out cool machine learning models is part of the work, it’s far from the only thing you’ll be doing as a data scientist. In reality, data science involves quite a bit of not-so-shiny grunt work to even make the available data corpus suitable for analysis. According to a Twitter poll conducted in 2019 by data scientist Vicki Boykis, fewer than 5% of respondents claimed to spend the majority of their time on ML models [1]. The largest percentage of data scientists said that most of their time was spent cleaning up the data to make it usable. ... Data science is a burgeoning field, and reducing it down to one concept is a misrepresentation which is at best false, and at worst dangerous. To excel in the field as a whole, it’s necessary to remove the pop-culture tunnel vision that seems to only notice machine learning.


NaaS adoption will thrive despite migration challenges

The pandemic has also played a significant role in spurring NaaS adoption, Chambers says. "During the early days of COVID-19 there was a rapid push for users to be able to connect quickly, reliably, and securely from anywhere at any time," he says. "This required many companies to make hardware/software purchases and rapid implementations that accelerated an already noticeable increase in overall network complexity over the last several years." Unfortunately, many organizations faced serious challenges while trying to keep pace with suddenly essential changes. "Companies that need to quickly scale up or down their network infrastructure capabilities, or those that are on the cusp of major IT infrastructure lifecycle activity, have become prime NaaS-adoption candidates," Chambers says. It’s easiest for organizations to adopt small-scale NaaS offerings to gain an understanding of how to evaluate potential risk and rewards and determine overall alignment to their organization’s requirements.


Securing DevOps amid digital transformation

The process of requesting a certificate from a CA, receiving it, manually binding it to an endpoint, and self-managing it can be slow and lack visibility. Sometimes, DevOps teams avoid established quality practices by using less secure means of cryptography or issuing their own certificates from a self-created non-compliant PKI environment – putting their organizations at risk. However, PKI certificates from certified and accredited globally trusted CAs offer the best way for engineers to ensure security, identity and compliance of their containers and the code stored within them. A certificate management platform, which is built to scale and manages large volumes of PKI certificates, is perfect for the DevOps ethos and their environments. Organizations can now automate the request and installation of compliant certificates within continuous integration/continuous deployment (CI/CD) pipelines and applications to secure DevOps practices and support digital transformation. Outsourcing your PKI to a CA means developers have a single source to turn to for all certificate needs and are free to focus on core competencies. 
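
As a sketch of what "automating the request and installation of compliant certificates within CI/CD pipelines" can look like, the snippet below calls a certificate-management API from a pipeline step. The endpoint, request fields, response field, and token variable are all hypothetical assumptions for illustration, not any vendor's actual interface.

    import os
    import requests

    CERT_API = "https://certs.example.internal/api/v1/certificates"   # hypothetical endpoint

    def request_certificate(common_name: str, days_valid: int = 30) -> str:
        """Ask the managed CA for a short-lived certificate during a pipeline run."""
        resp = requests.post(
            CERT_API,
            headers={"Authorization": f"Bearer {os.environ['CERT_API_TOKEN']}"},
            json={"common_name": common_name, "validity_days": days_valid},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["certificate_pem"]

    if __name__ == "__main__":
        pem = request_certificate("payments.svc.cluster.local")
        with open("tls.crt", "w") as f:
            f.write(pem)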


Reprogramming banking infrastructure to deliver innovation at speed

Fintech firms typically apply digital technology to processes those legacy institutions find difficult, time consuming, or costly to undertake, and they often focus on getting a single use case like payments, or alternative lending right. In contrast, neo banks, or challenger banks, deliver their services primarily through phone apps that often aim to do many things that a bank can do, including lending money and accepting deposits. A key advantage for both is that they don’t have to spend time, money, and organisational capital to transform into something new. They were born digital. Likewise, they both claim convenience as their prime value proposition. However, while customers want convenience, many still see banking as a high-touch service. If their bank has survived decades of consolidation and has served a family for generations, familiarity can be a bigger draw than convenience. That said, the COVID-19 pandemic has accelerated the online trend. More and more of us auto-pay our bills and buy our goods as well as our entertainment and services via e-commerce. 


No free lunch theorem in Quantum Computing

The no free lunch theorem entails that a machine learning algorithm’s average performance is dependent on the amount of data it has. “Industry-built quantum computers of modest size are now publicly accessible over the cloud. This raises the intriguing possibility of quantum-assisted machine learning, a paradigm that researchers suspect could be more powerful than traditional machine learning. Various architectures for quantum neural networks (QNNs) have been proposed and implemented. Some important results for quantum learning theory have already been obtained, particularly regarding the trainability and expressibility of QNNs for variational quantum algorithms. However, the scalability of QNNs (to scales that are classically inaccessible) remains an interesting open question,” the authors write. This also suggests a possibility that in order to model a quantum system, the amount of training data might also need to grow exponentially. This threatens to eliminate the edge quantum computing has over classical computing. The authors have discovered a method to eliminate the potential overhead via a newfound quantum version of the no free lunch theorem.


IT Talent Shortage: How to Put AI Scouting Systems to Work

The most likely people to leave a company are highly skilled employees who are in high demand (e.g., IT). Employees who feel they are underutilized and who want to advance their careers, and employees who are looking for work that can more easily balance with their personal lives, are also more likely to leave. It’s also common knowledge that IT employees change jobs often, and that IT departments don't do a great job retaining them for the long haul. HR AI can help prevent attrition if you provide it with internal employee and departmental data so that it can assess your employees, their talents and their needs based upon the search criteria that you give it. For instance, you can build a corporate employee database that goes beyond IT, and that lists all the relevant skills and work experiences that employees across a broad spectrum of the company possess. Using this method, you might identify an employee who is working in accounting, but who has an IT background, enjoys data analytics, and wants to explore a career change. Or you could identify a junior member of IT who is a strong communicator and can connect with end users in the business.


Automation and digital transformation: 3 ways they go together

All sorts of automation gets devised and implemented for specific purposes or tasks, sometimes for refreshingly simple reasons, like “Automating this makes our system more resilient, and automating this makes my job better.” This is the type of step-by-step automation long done by sysadmins and other operations-focused IT pros; it’s also common in DevOps and site reliability engineering (SRE) roles. IT automation happens for perfectly good reasons on its own, and it has now spread deep and wide in most, if not all, of the traditional branches of the IT family tree: development, operations, security, testing/QA, data management and analytics – you get the idea. None of this needs to be tethered to a digital transformation initiative; the benefits of a finely tuned CI/CD pipeline or security automation can be both the means and the end. There’s no such thing as digital transformation without automation, however. This claim may involve some slight exaggeration, and reasonable people can disagree. But digital transformation of the ambitious sort that most Fortune 500 boardrooms are now deeply invested in requires (among other things) a massive technology lever to accomplish, and that lever is automation.


The best way to lead in uncertain times may be to throw out the playbook

Organizations also used the sensing-responding-adapting model to combat misinformation and confusion about masks and vaccines. With conflicting guidance from the Centers for Disease Control and Prevention (CDC) in the US and the World Health Organization (WHO), one organization we studied opted for “full transparency” with a “fully digital” solution. The company built an app that included data from sources the company considered reliable, and it updated policies, outlined precautions, and offered ways to report vaccination status. The app turbocharged the company’s sense-respond-adapt capabilities by getting quality information in everyone’s hands and opening a new channel for regular two-way communication. There was no waiting for an “all hands” meeting to get meaningful questions and feedback. Reflecting on the results of the study, one takeaway became clear: it’s worthwhile for leaders of any team to absorb the lessons of sense-respond-adapt, even if there is no emergency at hand. Here are three ways to employ each step of the model.


The Three Building Blocks of Data Science

Data is worthless without the context for understanding it properly — context which can only be obtained by a domain expert: someone who understands the field where the data stems from and can thus provide the perspectives needed to interpret it correctly. Let’s consider a toy example to illustrate this. Imagine we collect data from a bunch of different golf games from recent years of the PGA Tour. We obtain all the data, we process and organize it, we analyze it, and we confidently publish our findings, having triple-checked all our formulas and computations. And then, we become laughingstocks of the media. Why? Well, since none of us has ever actually played golf, we didn’t realize that lower scores correspond to a better performance. As a result, all our analyses were based on the reverse, and therefore incorrect. This is obviously an exaggeration, but it gets the point across. Data only makes sense in context, and so it is essential to consult with a domain expert before attempting to draw any conclusions.
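
The golf example can be made concrete in a few lines: the same data yields opposite conclusions depending on whether the analyst knows that lower scores are better. The scores below are invented.

    rounds = {"Player A": 68, "Player B": 74, "Player C": 71}   # strokes per round (invented)

    # Without domain knowledge: assume bigger numbers mean better performance.
    naive_best = max(rounds, key=rounds.get)      # -> "Player B" (wrong)

    # With domain knowledge: in golf, the lowest score wins.
    correct_best = min(rounds, key=rounds.get)    # -> "Player A"

    print(naive_best, correct_best)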


Surprise! The metaverse could be great news for the enterprise edge

Metaverse latency control is more than just edge computing, it’s also edge connectivity, meaning consumer broadband. Faster broadband offers lower latency, but there’s more to latency control than just speed. You need to minimize the handling, the number of hops or devices between the user who’s pushing an avatar around a metaverse, and the software that understands what that means to what the user “sees” and what others see as well. Think fiber and cable TV, and a fast path between the user and the nearest edge, which is likely to be in a nearby major metro area. And think “everywhere” because, while the metaverse may be nowhere in a strict reality sense, it’s everywhere that social-media humans are, which is everywhere. Low latency, high-speed, universal consumer broadband? All the potential ad revenue for the metaverse is suddenly targeting that goal. As it’s achieved, the average social-media junkie could well end up with 50 or 100 Mbps or even a gigabit of low-latency bandwidth. There are corporate headquarters who don’t have it that good. 



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - March 21, 2022

Improve agile and app dev meetings

Sometimes, it’s the choice of tools for hybrid work that can simplify remote collaboration. Sometimes it’s how organizations, teams, and people use them. ... These basic tools help agile teams manage their priorities, requirements, and status to complete sprints and releases. There are also opportunities to improve collaboration with product owners and stakeholders using advanced road maps and sharing views of Jira issues on Confluence pages. Another option is to reduce the complexity in developing applications, dashboards, and data integrations with low-code and no-code tools. These tools can cut the time and collaboration required to prototype, develop, test, and deploy capabilities, and their visual programming modalities often lessen the need to create detailed implementation documents. Rosaria Silipo, PhD, principal data scientist and head of evangelism at KNIME, agrees and says, “Low-code tools are becoming increasingly popular and deliver the freedom to prototype swiftly. They enable the collaboration of simple to complex app dev within an integrated platform, where steps are consumable by technical and non-technical stakeholders.”


Is UX design regressing today?

Internet users are confronted all day long with various sites, each with its own logic, rules, and UX design. There is a need for flexibility on the part of users as they adapt throughout their day to applications that feel they have achieved the perfect logic for a good user experience. Every company has a website, a page on all the major social networks, an application. SaaS products are multiplying and smartphones are more and more used to do everything. This need for the digital presence of all companies has made the need for UX designers explode. As UX has become something commonplace, non-experts have expectations of UX designers, the expectation of designing an application that pleases. The core of the problem lies in this level of design: an application is not used in isolation. It is linked to dozens of others and is part of a life where digital is always present. If some people feel that UX design is regressing, it’s because of the lack of consideration for the ecosystem in which the applications will evolve. All the rules of UX design can be perfectly applied, but will still create friction if the logic of use has been thought out only for the application and not for the ecosystem.


Documenting the NFT voyage: A journey into the future

The most crucial task for any NFT project is to focus on innovative design and diversified utilities for its users. Moreover, the first-to-market NFT project will always have the edge over other competing projects to generate value. Unfortunately, while making copies of the original (forks) is easy, it does not always translate into a successful project. For example, the legendary Ethereum-based CryptoPunks from Larva Labs is the inspiration behind PolygonPunks residing on the Polygon blockchain. Although PolygonPunks is very successful, many consider it a ‘derivative collection’ that can compromise buyers’ safety. This is why the NFT marketplace OpenSea delisted PolygonPunks after a request from developers at Larva Labs. The second characteristic of a good NFT project is how strong the community is. A genuinely decentralized project with a well-knit community goes a long way in making it a success. As demonstrated above, the Pudgy Penguins and CryptoPunks communities are robust enough to protect the legacy of the projects. Moreover, interoperable NFTs help forge communities across blockchain networks, making them stronger.


“DevOps is a culture, it's not a job description”

In contrast to traditional software development lines, whereby those in product would define the product, pass it to the developers, who would send it to the testers, who would then assess its quality before sending it out for wider use, the ‘Dev-Centric’ culture at Wix advocates that the developer should remain in the middle of that process; it turns the assembly line into a circle with the engineer sitting comfortably within the compounds of all the other departments - the movie star in his own film in charge of filming and the final edit. “DevOps is a culture, it's not a job description… the DevOps culture, it’s kind of intertwined with continuous delivery. It is the culture of giving the developers the responsibility and ability to deploy their product end to end… DevOps is not a job description and I didn't want the people here in the company to confuse the two. It is a very similar concept of empowering the developers to run things on production.” Mordo, who joined Wix in 2010, has seen its growth from a simple website builder into one of the internet’s biggest players and Israel’s largest companies. 


Developer sabotages own npm module prompting open-source supply chain security questions

"Even if the deliberate and dangerous act of maintainer RIAEvangelist will be perceived by some as a legitimate act of protest, how does that reflect on the maintainer’s future reputation and stake in the developer community?," Liran Tal, Snyk's director of developer advocacy, said. "Would this maintainer ever be trusted again to not follow up on future acts in such or even more aggressive actions for any projects they participate in?" "When it comes to this particular issue of trust, I believe the best way for it to be handled is with proper software supply chain hygiene," Brian Fox, CTO of supply chain security firm Sonatype, tells CSO. "When you’re choosing what open-source projects to use, you need to look at the maintainers." Fox recommends exclusively choosing code from projects backed by foundations such as the Apache Foundation, which don't have projects with just one developer or maintainer. With foundations there is some oversight, group reviews and governance that's more likely to catch this type of abuse before it's released to the world.


Never-Mind the Gap: It Isn't Skills We're Short Of, It's Common Sense

Every person working in cybersecurity today started somewhere, and the amount of learning material currently available surpasses what was around when many of us started out. Enticing the right person to one of these outlets can spark a flame that can burn through an organization faster than anything else. When you ignite a passion, you ignite something deeper, and aiding these individuals in manifesting their talent can only benefit your organization. There needs to be a new narrative that cybersecurity is not only about having technical prowess because many roles don’t require a high level of technical expertise. These positions are a great stepping stone into the industry for those who lack the core technological know-how you might expect when you think of a “cybersecurity expert” and provide valuable insights and input to the security teams. Organizations love silos, but what happens when larger strategies overlap silos, technologies and outcomes?


Explore 9 essential elements of network security

Advanced network threat prevention products perform signatureless malware discovery at the network layer to detect cyber threats and attacks that employ advanced malware and persistent remote access. These products employ heuristics, code analysis, statistical analysis, emulation and machine learning to flag and sandbox suspicious files. Sandboxing -- the isolation of a file from the network so it can execute without affecting other resources -- helps identify malware based on its behavior rather than through fingerprinting. ... DDoS mitigation is a set of hardening techniques, processes and tools that enable a network, information system or IT environment to resist or mitigate the effect of DDoS attacks on networks. DDoS mitigation activities typically require analysis of the underlying system, network or environment for known and unknown security vulnerabilities targeted in a DDoS attack. This also requires identification of what normal conditions are -- through traffic analysis -- and the ability to identify incoming traffic to separate human traffic from humanlike bots and hijacked web browsers.
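
A toy sketch of the "identify what normal conditions are" step: build a baseline of request rates and flag intervals that sit far outside it. Real DDoS mitigation draws on many more signals; the traffic figures and threshold here are invented.

    from statistics import mean, stdev

    baseline = [1180, 1210, 1195, 1230, 1205, 1190, 1220]   # requests/min on a normal day (invented)
    mu, sigma = mean(baseline), stdev(baseline)

    def is_anomalous(requests_per_min: int, threshold: float = 4.0) -> bool:
        """Flag traffic that deviates sharply from the learned baseline."""
        return abs(requests_per_min - mu) > threshold * sigma

    print(is_anomalous(1215))    # False: within normal variation
    print(is_anomalous(48000))   # True: candidate DDoS spike worth inspecting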


Preparing for the quantum-safe encryption future

Quantum-safe encryption is key to addressing the quantum-based cybersecurity threats of the future, and Woodward predicts that a NIST candidate will eventually emerge as the new standard used to protect virtually all communications flowing over the internet, including browsers using TLS. “Google has already tried experiments with this using a scheme called New Hope in Chrome,” he says. Post-Quantum’s own encryption algorithm, NTS-KEM (now known as Classic McEliece), is the only remaining finalist in the code-based NIST competition. “Many have waited for NIST’s standard to emerge before taking action on quantum encryption, but the reality now is that this could be closer than people think, and the latest indication is that it could be in the next month,” says Cheng. Very soon, companies will need to start upgrading their cryptographic infrastructure to integrate these new algorithms, which could take over a decade, he says. “Microsoft’s Brian LaMacchia, one of the most respected cryptographers in the world, has summarized succinctly that quantum migration will be a much bigger challenge than past Windows updates.”


The value of DevEx: how starting with developers can boost customer experience

The benefits of building a great customer experience are clear, but when identifying how to actually go about curating a world-class customer experience, things become more complicated. Many start by looking at end-user features and technologies such as chatbots, conversational AI, omnichannel messaging, and more as a way to kickstart CX efforts. Yet while all of these can, and should improve customer experience, they are not addressing customer experience at its core. The reality is, in order to truly build a transformational customer experience, you must first start with providing a better experience for those who are responsible for building your products, services, and the experiences customers have when interacting with them. You must start with your developers. Developer experience is customer experience. ... Creating a great developer experience means creating a frictionless developer experience. If developers can spend less time figuring out tools, processes, and procedures, they can spend more time innovating and building modern features and experiences for their end-users.


Why machine identities matter (and how to use them)

It is well accepted that reliance on perimeter network security, shared accounts, or static credentials such as passwords is an anti-pattern. Instead of relying on shared accounts, modern human-to-machine access is now performed using human identities via SSO. Instead of relying on the network perimeter, a zero-trust approach is preferred. These innovations have not yet made their way into the world of machine-to-machine communication. Machines continue to rely on static credentials – the machine equivalent of a password, called the API key. Machines often rely on perimeter security as well, with microservices connecting to databases without encryption, authentication, authorization, or audit. There is an emerging consensus that password-based authentication and authorization for humans is woefully inadequate to secure our critical digital infrastructure. As a result, organizations are increasingly implementing “passwordless” solutions for their employees that rely on integration with SSO providers and leverage popular, secure, and widely available hardware-based solutions like Apple Touch ID and Face ID for access.
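One practical direction this points to for machines is replacing long-lived API keys with short-lived, signed credentials issued per workload. The standard-library sketch below mints and verifies an HMAC-signed token with an expiry; real deployments would typically use certificates or an identity provider rather than a shared signing key, so the names and five-minute lifetime here are illustrative assumptions only.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"issuer-held-secret"   # hypothetical; in practice held by an identity service

def issue_token(service_name: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived credential for a workload instead of a static API key."""
    claims = {"sub": service_name, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict | None:
    """Return the claims if the signature is valid and the token has not expired."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

token = issue_token("billing-service")
print(verify_token(token))   # valid claims now; None once the short TTL lapses
```

Because the credential expires on its own, a leaked token is far less valuable than a leaked static API key, mirroring the shift the passage describes for human authentication.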



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer

Daily Tech Digest - March 20, 2022

Can Open Source Sustain Itself without Losing Its Soul?

It’s clear that businesses will need to play more of a role in open source. As Valsorda noted in the same blog post, “open source sustainability and supply chain security are on everyone’s slide decks, blogs, and press releases. Big companies desperately need the open source ecosystem to professionalize.” Amanda Brock, CEO of OpenUK, a not-for-profit that supports the use of open technologies, concurred: “We need to know not only that we have the best software that can be produced — which collaborative and diverse globally produced open source software is — but also that appropriate funding has been provided to ensure that those building all this essential software are able to maintain and support it being secure.” Brock cited a number of examples of where this is happening in the U.K.; for example, she pointed to the work of the Energy Digitalisation Taskforce. That governmental group “suggested that the spine of the digitalized energy sector should be built on open source software. The National Health Service in the U.K. also now has an open source software-first approach for code it creates that it is increasingly trying to live by.”


Using the Business Model Canvas in Enterprise Architecture

The Business Model Canvas brings together nine key elements of a business model, making it possible to observe and describe the relationships of those nine elements to each other. As architects, plotting the relationship of one element to other elements is familiar territory. We align patterns, find gaps, map gives and gets, and understand strategy by assessing the relationships of the critical systems in an architecture landscape. The Business Model Canvas is yet another tool to help us convey understanding. Many enterprise architects hang their hats on “People, Process, Technology,” the PPT framework popularized in the 1990s. The roots of PPT extend further back, to the 1960s and the Diamond Model from Harold Leavitt. PPT and the Diamond Model are useful, for certain, but the canvas offers something that every enterprise architect should value. In the aggregate, the nine blocks tell the story of the organization, how it goes to market and aims to create, deliver, and capture value...


How Enterprise Architecture Helps Reduce IT Costs

That is easier said than done with the traditional process of manual follow-ups, hampered by inconsistent documentation that is often scattered across many teams. Poor documentation also often means that maintenance efforts are duplicated, wasting resources that could have been better deployed elsewhere. The result is the equivalent of around 3 hours of a dedicated employee’s focus per application per year spent on documentation, governance, and maintenance. Not so for the organization with a digital-native EA platform that leverages your data to enable scalability and automation in workflows and messaging, so you can reach out to the most relevant people in your organization when it's most needed. Features like these can save an immense amount of time otherwise spent identifying the right people to talk to and when to reach out to them, making your enterprise architecture the single source of truth and a solid foundation for effective governance. The result is a reduction of approximately a third of the time usually needed to achieve this.


AI drug algorithms can be flipped to invent bioweapons

Now consider an AI algorithm that can generate deadly biochemicals that behave like VX but are made up of entirely non-regulated compounds. "We didn't do this but it is quite possible for someone to take one of these models and use it as an input to the generative model, and now say 'I want something that is toxic', 'I want something that does not use the current precursors on the watch list'. And it generates something that's in that range. We didn't want to go that extra step. But there's no logical reason why you couldn't do that," Urbina added. If it's not possible to achieve this, you're back to square one. As veteran drug chemist Derek Lowe put it: "I'm not all that worried about new nerve agents ... I'm not sure that anyone needs to deploy a new compound in order to wreak havoc – they can save themselves a lot of trouble by just making Sarin or VX, God help us." There is no strict regulation on the machine-learning-powered synthesis of new chemical molecules. 


A Primer on Proxies

In HTTP/2, each request and response is sent on a different stream. To support this, HTTP/2 defines frames that contain the stream identifier that they are associated with. Requests and responses are composed of HEADERS and DATA frames which contain HTTP header fields and HTTP content, respectively. Frames can be large. When they are sent on the wire they might span multiple TLS records or TCP segments. Side note: the HTTP WG has been working on a new revision of the document that defines HTTP semantics that are common to all HTTP versions. The terms message, header fields, and content all come from this description. HTTP/2 concurrency allows applications to read and write multiple objects at different rates, which can improve HTTP application performance, such as web browsing. HTTP/1.1 traditionally dealt with this concurrency by opening multiple TCP connections in parallel and striping requests across these connections. In contrast, HTTP/2 multiplexes frames belonging to different streams onto the single byte stream provided by one TCP connection. 
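As a concrete look at how stream identifiers ride along with every frame, the sketch below parses the fixed 9-byte HTTP/2 frame header defined in the HTTP/2 specification: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (plus one reserved bit). It is a minimal reader for captured bytes, not an implementation of any particular proxy.

```python
import struct
from typing import NamedTuple

class FrameHeader(NamedTuple):
    length: int      # payload size in bytes (24 bits)
    type: int        # e.g. 0x0 = DATA, 0x1 = HEADERS
    flags: int
    stream_id: int   # 31 bits; 0 refers to the connection as a whole

def parse_frame_header(data: bytes) -> FrameHeader:
    """Decode the 9-byte header that precedes every HTTP/2 frame."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    stream_id = struct.unpack(">I", data[5:9])[0] & 0x7FFFFFFF  # clear the reserved bit
    return FrameHeader(length, frame_type, flags, stream_id)

# A 16-byte HEADERS frame (type 0x1) with END_STREAM|END_HEADERS flags on stream 1
sample = bytes([0x00, 0x00, 0x10, 0x01, 0x05, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(sample))
```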


Thinking Strategically Will Help You Get Ahead and Stay Ahead

Create mental space for new ideas to kick in. Without quiet time to sit with your thoughts, face the uncomfortable silence, and let your mind wander, you cannot draw useful connections. It will not happen the first time around, and probably not even the second time. But if you are persistent in your efforts, without the digital and other distractions of daily life, you will start to notice new patterns of thinking. New ideas that you never thought about before will start to surface. Another great strategy is to not restrict yourself to knowledge within your current scope of work. Spend time learning about your business and industry. Meet with other functions within your organization to understand how they operate, what their challenges are and how they make decisions. All of this knowledge will enable you to apply different mental models to connect ideas from different domains, thereby expanding your circle of competence and building your strategic thinking skills. Remember, building strategic thinking skills involves looking beyond the obvious and the now, to prodding and shaping the uncertain future.


Microsoft Azure reveals a key breakthrough toward scaling quantum computing

“It’s never been done before, and until now it was never certain that it could be done. And now it’s like yes, here’s this ultimate validation that we’re on the right path,” she said. What have researchers achieved? They have developed devices capable of inducing a topological phase of matter bookended by a pair of Majorana zero modes, types of quantum excitations first theorized about in 1937 that don’t normally exist in nature. Majorana zero modes are crucial to protecting quantum information, enabling reliable computation, and producing a unique type of qubit, called a topological qubit, which Microsoft’s quantum machine will use to store and compute information. A quantum computer built with these qubits will likely be more stable than machines built with other types of known qubits and may help solve some of the problems which currently baffle classical computers. “Figuring out how to feed the world or cure it of climate change will require discoveries or optimization of molecules that simply can’t be done by today’s classical computers.”


Lensless Camera Captures Cellular-Level 3D Details

At the sensor, light that comes through the mask appears as a point spread function, a pair of blurry blobs that seems useless but is actually key to acquiring details about objects below the diffraction limit that are too small for many microscopes to see. The blobs’ sizes, shapes, and distances from each other indicate how far the subject is from the focal plane. Software reinterprets the data into an image that can be refocused at will. The researchers first tested the device by capturing cellular structures in a lily of the valley, and then calcium activity in small jellyfish-like hydra. The team then monitored a running rodent, attaching the device to the rodent’s skull and then setting the animal down on a wheel. Data showed fluorescent-tagged neurons in a region of the animal’s brain, connecting activity in the motor cortex with motion and resolving blood vessels as small as 10 µm in diameter. In collaboration with Rebecca Richards-Kortum and research scientist Jennifer Carns from Rice Bioengineering, the team identified vascular imaging as a potential clinical application of the Bio-FlatScope. 
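The step summarized as “software reinterprets the data into an image” is, in simplified form, a deconvolution of the sensor measurement with the calibrated point spread function. The NumPy sketch below shows a Wiener-style frequency-domain deconvolution on a toy two-blob PSF; it is a generic illustration of PSF-based reconstruction, not the Bio-FlatScope team’s actual algorithm, and the regularization constant is an arbitrary assumption.

```python
import numpy as np

def wiener_deconvolve(measurement: np.ndarray, psf: np.ndarray, reg: float = 1e-2) -> np.ndarray:
    """Estimate the scene from a lensless measurement given a calibrated PSF.

    Divides out the PSF in the frequency domain; `reg` damps frequencies where
    the PSF carries little energy so noise does not blow up.
    """
    H = np.fft.fft2(psf, s=measurement.shape)            # PSF transfer function
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)          # Wiener-style inverse filter
    return np.real(np.fft.ifft2(X))

# Toy example: blur two point sources with a two-blob PSF, then recover them
rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20, 20] = scene[40, 45] = 1.0
psf = np.zeros((64, 64)); psf[30, 28] = psf[34, 36] = 0.5      # the "pair of blurry blobs"
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(measurement + 0.001 * rng.standard_normal(scene.shape), psf)
peak = np.unravel_index(np.argmax(recovered), recovered.shape)
print("brightest recovered pixel:", peak)   # should land on one of the original point sources
```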


Handling Out-of-Order Data in Real-Time Analytics Applications

The solution is simple and elegant: a mutable cloud native real-time analytics database. Late-arriving events are simply written to the portions of the database they would have been if they had arrived on time in the first place. In the case of Rockset, a real-time analytics database that I helped create, individual fields in a data record can be natively updated, overwritten or deleted. There is no need for expensive and slow copy-on-writes, a la Apache Druid, or kludgy segregated dynamic partitions. A mutable real-time analytics database provides high raw data ingestion speeds, the native ability to update and backfill records with out-of-order data, all without creating additional cost, data error risk or work for developers and data engineers. This supports the mission-critical real-time analytics required by today’s data-driven disruptors. In future blog posts, I’ll describe other must-have features of real-time analytics databases such as bursty data traffic and complex queries.
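A toy way to see the difference: with a mutable store, a late-arriving event is simply an upsert keyed by the record it belongs to, rather than a rewrite of an immutable partition. The sketch below is a deliberately minimal in-memory illustration of that idea, not Rockset's API or storage engine; the merge rule (newest event time wins, older events backfill missing fields) is an assumption made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MutableEventStore:
    """Minimal in-memory stand-in for a mutable analytics table keyed by record id."""
    records: dict = field(default_factory=dict)

    def upsert(self, record_id: str, event_time: int, fields: dict) -> None:
        """Write the event where it belongs, even if it arrives late or out of order."""
        existing = self.records.get(record_id)
        if existing is None or event_time >= existing["event_time"]:
            # Newer (or first) event: merge fields and move the event time forward
            self.records[record_id] = {**(existing or {}), **fields, "event_time": event_time}
        else:
            # Older event arriving late: backfill only fields we have never seen
            for key, value in fields.items():
                existing.setdefault(key, value)

store = MutableEventStore()
store.upsert("order-1", event_time=100, fields={"status": "shipped"})
store.upsert("order-1", event_time=90, fields={"warehouse": "east"})   # late event backfills
print(store.records["order-1"])   # {'status': 'shipped', 'event_time': 100, 'warehouse': 'east'}
```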


CISOs face 'perfect storm' of ransomware and state-supported cybercrime

With not just ransomware gangs raiding network after network, but nation states consciously turning a blind eye to it, today's chief information security officers are caught in a "perfect storm," says Cybereason CSO Sam Curry. "There's this marriage right now of financially motivated cybercrime that can have a critical infrastructure and economic impact," Curry said during a CISO roundtable hosted by his endpoint security shop. "And there are some nation states that do what we call state-ignored sanctioning," he continued, using Russia-based REvil and Conti ransomware groups as examples of criminal operations that benefit from their home governments looking the other way. "You get the umbrella of sovereignty, and you get the free license to be a privateer in essence," Curry said. "It's not just an economic threat. It's not just a geopolitical threat. It's a perfect storm." It's probably not a huge surprise to anyone that destructive cyberattacks keep CISOs awake at night.



Quote for the day:

"Leadership means forming a team and working toward common objectives that are tied to time, metrics, and resources." -- Russel Honore