Daily Tech Digest - February 28, 2022

Follow your S curve

By the time Rogers’s seminal Diffusion of Innovations was published in 1962, the rural sociologist was convinced that the S curve of innovation diffusion depicted “a kind of universal process of social change.” Indeed, S curves have been used in many arenas since then, and Rogers’s book is among the most cited in the social sciences, according to Google Scholar. Johnson’s S Curve of Learning follows this well-established path. There’s the slow advancement toward a “launch point,” during which you canvass the (hopefully) myriad opportunities for career growth available to you and pick a promising one. Then there’s the fast growth once you hit the “sweet spot,” as you build momentum, forging and inhabiting the new you. And, finally, there is “mastery,” the stage in which you might cruise for a while, reaping the rewards of your efforts, before you start looking for something new, starting the cycle all over again. Johnson lays out six different roles that you must play as you travel along her learning curve. In the launch phase, where I spent what felt like an eternity, you first act as an Explorer, who searches for and picks a destination.


Automation: 5 issues for IT teams to watch in 2022

IT automation rarely involves IT alone. Virtually any initiative beyond the experimentation or proof-of-concept phase will involve at least two – and likely several – areas of the business. The more ambitious the goals, the truer this becomes. Good luck to the IT leaders that tackle “improve customer satisfaction ratings by X” or “reduce call wait times by Y” without involving marketing, customer service/customer experience, and other teams, for example. In fact, automation initiatives are best served by aligning various stakeholders from the very start – before specific goals (and metrics for evaluating progress toward those goals) are set. “It’s really important to identify the key benefits you wish to achieve and get all stakeholders on the same page,” says Mike Mason, global head of technology at Thoughtworks. This entails more than just rubber-stamping your way to a consensus that automation will be beneficial to the business. Stakeholders need to align on why they want to automate certain processes or workflows, what the impacts (including potential downsides) will be, and what success actually looks like. Presuming alignment on any of these issues can put the whole project at risk.


Daxin: Stealthy Backdoor Designed for Attacks Against Hardened Networks

Daxin is a backdoor that allows the attacker to perform various operations on the infected computer such as reading and writing arbitrary files. The attacker can also start arbitrary processes and interact with them. While the set of operations recognized by Daxin is quite narrow, its real value to attackers lies in its stealth and communications capabilities. Daxin is capable of communicating by hijacking legitimate TCP/IP connections. In order to do so, it monitors all incoming TCP traffic for certain patterns. Whenever any of these patterns are detected, Daxin disconnects the legitimate recipient and takes over the connection. It then performs a custom key exchange with the remote peer, where two sides follow complementary steps. The malware can be both the initiator and the target of a key exchange. A successful key exchange opens an encrypted communication channel for receiving commands and sending responses. Daxin’s use of hijacked TCP connections affords a high degree of stealth to its communications and helps to establish connectivity on networks with strict firewall rules.


Leveraging mobile networks to threaten national security

Once threat actors have access to mobile telecoms environments, the threat landscape is such that several orders of magnitude of leverage become possible in the execution of cyberattacks. The ability to variously infiltrate, manipulate and emulate the operations of communications service providers and trusted brands – abusing the trust of the countless people using their services every day – derives from threat actors’ capability to weaponize the ‘trust’ built into the very design of the protocols, systems, and processes that exchange traffic between service providers globally. The primary point of leverage derives from threat actors’ sustained capacity over time to acquire data of targeting value, including personally identifiable information for public and private citizens alike. While such information can be gained through cyberattacks directed to that end against the data-rich network environments of mobile operators themselves, the incidence of data breaches at major data holders across industries today is such that it is increasingly possible to simply purchase massive amounts of such data from other threat actors.


A Security Technique To Fool Would-Be Cyber Attackers 

Researchers demonstrate a method that safeguards a computer program’s secret information while enabling faster computation. Multiple programs running on the same computer may not be able to directly access each other’s hidden information, but because they share the same memory hardware, their secrets could be stolen by a malicious program through a “memory timing side-channel attack.” This malicious program notices delays when it tries to access a computer’s memory, because the hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program’s secrets, like a password or cryptographic key. One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this dramatically slows down computation. Instead, a team of MIT researchers has devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method is able to speed up programs by 12 percent when compared to state-of-the-art security schemes.


Is API Security the New Cloud Security?

While organizations previously used APIs more sparingly, predominantly for mobile apps or some B2B traffic, “now pretty much everything is powered by an API,” Klimek said. “So of course, all of these new APIs introduce a lot of security risks, and that’s why a lot of CISOs are now paying attention.” Imperva, which Gartner named a “leader” in its web application and API protection (WAAP) Magic Quadrant, lumps API security risks into two categories, according to Klimek. The first one, technical vulnerabilities, includes a bunch of risks that can also exist in standard web applications such as the OWASP Top 10 application security risks and CVE vulnerabilities. The recent Log4j vulnerability falls into this bucket — and demonstrates how far-reaching these types of security flaws can be. Most Imperva customers tackle these API threats first, “because they tend to be some of the most acute and they require just adopting their existing application security strategies,” such as code scanning during the development process and deploying web application firewalls or runtime application self-protection technology, Klimek explained.


Inside the blockchain developers’ mind: Building a free-to-use social DApp

Even with a pretty good user experience, telling people they have to spend money before they can use an app is a barrier to entry and winds up feeling a whole lot like a fee. I would know; this is exactly what happened on our previous blockchain, Steem. To solve that problem, we added a feature called “delegation,” which allowed people with tokens (e.g. developers) to delegate their mana (called Steem Power) to their users. This way, end-users could use Steem-based applications even if they didn’t have any of the native token STEEM. But that design was very tailored to Steem, which did not have smart contracts and required users to first buy accounts. The biggest problem with delegations was that there was no way to control what a user did with that delegation. Developers want people to be able to use their DApps for free so that they can maximize growth and generate revenue in some other way, such as a subscription or through in-game item sales. They don’t want people taking their delegation to trade in decentralized finance (DeFi) or using it to play some other developer’s great game like Splinterlands.


Data governance at the speed of business

Once the data governance organization has been built and its initial policies defined, you can begin to build the muscles that will make data governance a source of nimbleness that will help you anticipate issues, seize opportunities, and pivot quickly as the business environment changes and new sources of data become available. Your data governance capability is responsible for identifying, classifying, and integrating these new and changing data sources, which may come in through milestone events such as mergers or via the deployment of new technologies within your organization. It does so by defining and applying a repeatable set of policies, processes, and supporting tools, the application of which you can think of as a gated process, a sequence of checkpoints new data must pass through to ensure its quality. The first step of the process is to determine what needs to be done to introduce the new data harmoniously. Take, for example, one of our B2B software clients that acquired a complementary company and sought to consolidate the firm’s customer data. 


Irish data watchdog calls for ‘objective metrics’ for big tech regulation

Dixon said that “in some respects at least”, the DPC needs to do better and that it would be beneficial for regulators to have a “shared understanding” of what measures they are tracking. “In the absence of an agreed set of measures to determine achievements or deficiencies, the standing of the GDPR’s enforcement regime in overall terms is at risk of damage,” she said. Dixon said that this was particularly the case “when certain types of allegations” levelled against the Irish DPC “serve only to obscure the true nature and extent of the challenges” presented by the EU regulatory framework – which requires member states to legislate for the enforcement of data protection across the EU. ... That has created a vacuum and “a narrative has emerged in which the number of cases, the quantity and size of the administrative fines levied, are treated as the sole measure of success, informed by the effectiveness of financial penalties” at driving changes in behaviour.


Digital transformation: 3 roadblocks and how to overcome them

Many sectors, such as healthcare and financial services, operate within a complex web of constantly changing regulations that can be difficult to navigate. These regulations, while robust, are critical for protecting sensitive data such as patient information in healthcare, for the proper execution of protocol in law enforcement, and for other essential data that must be managed and used responsibly. How customer and internal data is collected, stored, managed, and used must be prioritized, especially when an enterprise transitions from legacy systems. Establishing a digital system that supports compliance with regulations is a challenge, but once the system is established, every interaction within the organization becomes data that can be monitored if you have the tools to interpret it. Knowing what is going on in every corner of an organization is central to remaining compliant, and setting up intelligent tools that can detect risk across the enterprise will ensure that your organization’s digital transformation is rooted in compliance-first strategies.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them. "-- Warren G. Bennis

Daily Tech Digest - February 27, 2022

Oh, Snap! Security Holes Found in Linux Packaging System

The first problem was that the snap daemon snapd didn’t properly validate the snap-confine binary’s location. Because of this, a hostile user could hard-link the binary to another location. This, in turn, meant a local attacker might be able to use the issue to execute other arbitrary binaries and escalate privileges. The researchers also discovered a race condition in the snapd snap-confine binary when preparing a private mount namespace for a snap. Using it, a local attacker could bind-mount their own contents inside the snap’s private mount namespace and make snap-confine execute arbitrary code, giving a clear path to escalate privileges all the way to root. There’s no remote way to directly exploit this. But if an attacker can log in as an unprivileged user, they could quickly use this vulnerability to gain root privileges. Canonical has released a patch that fixes both security holes. The patch is available in the following supported Ubuntu releases: 21.10, 20.04, and 18.04. A simple system update will fix this nicely.


The DAO is a major concept for 2022 and will disrupt many industries

It is not yet clear where these disruptive technologies will lead us, but we are sure that there will be much value up for grabs. At the convergence of Web3 and NFTs lie many platforms looking to leverage technology and infrastructure to make the NFT ecosystem more decentralized, structured and community-driven. By combining social building and governance, the decentralized autonomous organization takes that disruption a notch higher. The DAO is one major invention that is challenging current systems of governance. Utilizing NFTs, DAOs are changing our perspective on how organizations and systems should be run, and they lend further credence to the idea that the optimal form of governance need not rely on hierarchical structures. With the principal-agent problem limiting the growth of organizations and preventing agents from feeling like part of a team, you can see why the need for decentralized organizations fostering community inclusion is paramount. Is there something you would change about your current organization if given the chance? Leadership?


Use the cloud to strengthen your supply chain

What’s interesting about this process is that it does not entail executives in the C-suites pulling all-nighters to come up with these innovative solutions. It’s 100% automated using huge amounts of data and machine learning and embedding these things directly within business processes so the fix happens seconds after the supply chain problem is found. These aspects of intelligent supply chain automation are not new. For years, there has been some deep thinking in terms of how to automate supply chains more effectively. Those of you who specialize in supply chains understand this far too well. How many companies are willing to invest in the innovation—and even the risk—of leveraging these new systems? Most are not, and they are seeing the downsides from the markets tossing them curveballs that they try to deal with using traditional approaches. We’re seeing companies that have been in 10th place in a specific market move to second or third place by differentiating themselves with these intelligent cloud-based systems.


Open Source Code: The Next Major Wave of Cyberattacks

When it comes to testing the resilience of your open source environment with tools, static code analysis is a good first step. Still, organizations must remember that this is only the first layer of testing. Static analysis refers to analyzing the source code before the actual software application or program goes live and addressing any discovered vulnerabilities. However, static analysis cannot detect all malicious threats that could be embedded in open source code. Additional testing in a sandbox environment should be the next step. Stringent code reviews, dynamic code analysis, and unit testing are other methods that can be leveraged. After scanning is complete, organizations must have a clear process to address any discovered vulnerabilities. Developers may find themselves up against a release deadline, or the software patch may require refactoring the entire program, putting a strain on timelines. This process should help developers navigate tough choices to protect the organization's security by giving clear next steps for addressing vulnerabilities and mitigating issues.


A guide to document embeddings using Distributed Bag-of-Words (DBOW) model

In real-world NLP applications, machines need to understand the context behind text that is longer than a single word. For example, suppose we want to find cricket-related tweets on Twitter. We could start by making a list of all the words related to cricket and then look for tweets that contain any word from the list. This approach can work to an extent, but what if a cricket-related tweet does not contain any word from the list? Take a tweet that mentions the name of an Indian cricketer without ever stating that he is an Indian cricketer. Many of the applications and websites we use daily, such as Facebook, Twitter and Stack Overflow, rely on this kind of approach and fail to return the right results for us. To cope with such difficulties we can use document embeddings, which learn a vector representation of each document rather than of individual words. This can be thought of as learning vector representations at the paragraph (document) level instead of learning only word-level representations across the whole corpus.
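Below is a minimal, hedged sketch of training DBOW document embeddings with Gensim's Doc2Vec; setting dm=0 selects the distributed bag-of-words variant, and the tiny corpus and query tweet are invented purely for illustration.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy "tweet" corpus; a real application would use thousands of documents.
tweets = [
    "virat kohli scored a brilliant century in the test match",
    "the new laptop ships with a great keyboard and display",
    "spin bowlers dominated the final day of the series",
]
tagged = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(tweets)]

# dm=0 selects the Distributed Bag-of-Words (DBOW) training mode.
model = Doc2Vec(tagged, dm=0, vector_size=50, window=5, min_count=1, epochs=200)

# Infer a vector for an unseen tweet and look up the most similar training document.
query = model.infer_vector("kohli hits another hundred".split())
print(model.dv.most_similar([query], topn=1))
```

In principle, the document-level vector lets a cricket tweet match related documents even when it shares few exact words with a hand-built keyword list.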


Great Resignation or Great Redirection?

All this Great Resignation talk has many panicking and being reactive. We definitely shouldn’t ignore it, but we should seek to understand what is happening and why. And what the implications are for the future. The truly historical event is the revolution in how people conceive of work and its relationship to other life priorities. Even within that, there are distinctively different categories. We know service workers in leisure and hospitality got hit disproportionately hard by the pandemic. These people unexpectedly found themselves jobless, unsure how they would pay their bills and survive. Being resilient and hard-working, many — like my Uber driver — found gigs doing delivery, rideshare or other jobs giving greater flexibility and autonomy. These jobs also provided better pay than traditional service roles. Now, with their former jobs calling for their return, this group of workers has the ability to choose for themselves what they want. When Covid displaced office workers to their homes, they were bound to realize it was nice to not have that commute or the road warrior travel.


The post-quantum state: a taxonomy of challenges

While all the data seems to suggest that replacing classical cryptography with post-quantum cryptography in the key exchange phase of TLS handshakes is a straightforward exercise, the problem seems to be much harder for handshake authentication (or for any protocol that aims to provide authentication, such as DNSSEC or IPsec). The majority of TLS handshakes achieve authentication by using digital signatures generated via advertised public keys in public certificates (what is called “certificate-based” authentication). Most of the post-quantum signature algorithms currently being considered for standardization in the NIST post-quantum process have signatures or public keys that are much larger than their classical counterparts. Their computation time is, in the majority of cases, also much longer. It is unclear how this will affect TLS handshake latency and round-trip times, though we now have better insight into which sizes can be used. We still need to know how much slowdown will be acceptable for early adoption.


An overview of the blockchain development lifecycle

Databases developed with blockchain technologies are notoriously difficult to hack or manipulate, making them a perfect space for storing sensitive data. Blockchain software development requires an understanding of how blockchain technology works. To learn blockchain development, developers must be familiar with interdisciplinary concepts, for example, with cryptography and with popular blockchain programming languages like Solidity. A considerable amount of blockchain development focuses on information architecture, that is, how the database is actually to be structured and how the data is to be distributed and accessed with different levels of permissions. ... Determine if the blockchain will include specific permissions for targeted user groups or if it will comprise a permissionless network. Afterward, determine whether the application will require the use of a private or public blockchain network architecture. Also consider hybrid options such as the consortium or public permissioned blockchain architecture. With a public permissioned blockchain, a participant can only add information with the permission of other registered participants.


How TypeScript Won Over Developers and JavaScript Frameworks

Microsoft’s emphasis on community also extends to developer tooling; another reason the Angular team cited for their decision to adopt the language. Microsoft’s own VS Code naturally has great support for TypeScript, but the TypeScript Language Server provides a common set of editor operations — like statement completions, signature help, code formatting, and outlining. This simplifies the job for vendors of alternative IDEs, such as JetBrains with WebStorm. Ekaterina Prigara, WebStorm project manager at JetBrains, told the New Stack that “this integration works side-by-side with our own support of TypeScript – some of the features of the language support are powered by the server, whilst others, e.g. most of the refactorings and the auto import mechanism, by the IDE’s own support.” The details of the integration are quite complex. Continued Prigara, “Completion suggestions from the server are shown but they could, in some cases, be enhanced with the IDE’s suggestions. It’s the same with the error detection and quick fixes. Formatting is done by the IDE. Inferred types shown on hover, if I’m not mistaken, come from the server. ...”


Developing and Testing Services Among a Sea of Microservices

The first option is to take all of the services that make up the entire application and put them on your laptop. This may work well for a smaller application, but if your application is large or has a large number of services, this solution won’t work very well. Imagine having to install, update, and manage 500, 1,000, or 5,000 services in your development environment on your laptop. When a change is made to one of those services, how do you get it updated? ... The second option solves some of these issues. Imagine having the ability to click a button and deploy a private version of the application in a cloud-based sandbox accessible only to you. This sandbox is designed to look exactly like your production environment. It may hopefully even use the same Terraform configurations to create the infrastructure and get it all connected, but it will use smaller cloud instances and fewer instances, so it won’t cost as much to run. Then, you can link your service running on your laptop to this developer-specific cloud setup and make it look like it’s running in a production environment.



Quote for the day:

"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis

Daily Tech Digest - February 26, 2022

How To Study and Learn Complex Software Engineering Concepts

Chunking is a powerful technique to learn new concepts by breaking big and complex subjects down into smaller, manageable units that represent the core concepts you need to master. Let’s say you would like to start your Data Science journey. Grab a book or find a comprehensive online curriculum on the subject and begin by scanning the table of contents and skim-reading the chapters by browsing the headers, sub-headers and illustrations. This allows you to get a feel of what material you are about to explore and make mental observations on how it is organised as well as start appreciating what the big picture looks like, so you can then fill in the details later. After this first stage, you need to start learning the ins and outs of the individual chunks. It is not as intimidating as you originally thought, as you have already formed an idea of what you will be studying. So, carrying on our previous example, you can go through the book chapters in-depth, and then supplement your knowledge by looking at Wikipedia, watching video tutorials, finding online resources, and taking extensive notes along the way. 


RISC-V AI Chips Will Be Everywhere

The adoption of RISC-V, a free and open-source computer instruction set architecture first introduced in 2010, is taking off like a rocket. And much of the fuel for this rocket is coming from demand for AI and machine learning. According to the research firm Semico, the number of chips that include at least some RISC-V technology will grow 73.6 percent per year to 2027, when there will be some 25 billion AI chips produced, accounting for US $291 billion in revenue. The increase from what was still an upstart idea just a few years ago to today is impressive, but for AI it also represents something of a sea change, says Dave Ditzel, whose company Esperanto Technologies has created the first high-performance RISC-V AI processor intended to compete against powerful GPUs in AI-recommendation systems. According to Ditzel, during the early mania for machine learning and AI, people assumed general-purpose computer architectures—x86 and Arm—would never keep up with GPUs and more purpose-built accelerator architectures.


Sustainable architectures in a world of Agile, DevOps, and cloud

Driving architectural decisions is an essential activity in Continuous Architecture, and architectural decisions are the primary unit of work of a practitioner. Almost every architectural decision involves tradeoffs. For example, a decision made to optimize the implementation of a quality attribute requirement such as performance may negatively impact the implementation of other quality attributes, such as usability or maintainability. An architectural decision made to accelerate the delivery of a software system may increase technical debt, which needs to be “repaid” at some point in the future and may impact the sustainability of the system. Finally, all architectural decisions affect the cost of the system, and compromises may need to be made in order to meet the budget allocated to that system. All tradeoffs are reflected in the executable code base. Tradeoffs often are the least unfavorable ones rather than the optimal ones because of constraints beyond the team’s control, and decisions often need to be adjusted based on the feedback received from the system stakeholders.


6 Cyber-Defense Steps to Take Now to Protect Your Company

Modern device management is an essential part of increasing security in remote and hybrid work environments. A unified endpoint management (UEM) approach fully supports bring-your-own-device (BYOD) initiatives while maximizing user privacy and securing corporate data at the same time. UEM architectures usually include the ability to easily onboard and configure device and application settings at scale, establish device hygiene with risk-based patch management and mobile threat protection, monitor device posture and ensure compliance, identify and remediate issues quickly and remotely, automate software updates and OS deployments, and more. Choose a UEM solution with management capabilities for a wide range of operating systems, and one that is available both on-premises and via software-as-a-service (SaaS). ... Companies should look to combat device vulnerabilities (jailbroken devices, vulnerable OS versions, etc.), network vulnerabilities and application vulnerabilities (high security risk assessment, high privacy risk assessment, suspicious app behavior, etc.).


Europe proposes rules for fair access to connected device data

The Data Act looks to be a key component of the EU’s response to that threat. ... Secondly, the Commission is concerned about abusive contractual terms being imposed on smaller companies by more powerful platforms and market players to, essentially, extract the less powerful company’s most valuable data — so the Data Act will bring in a “fairness test” with the goal of protecting SMEs against unfair contractual terms. The legislation will stipulate a list of unilaterally imposed contractual clauses that are deemed or presumed to be unfair — such as a clause stating that a company can unilaterally interpret the terms of the contract — and those that do not pass the test will not be binding on SMEs. The Commission says it will also develop and recommend non-binding model contractual terms, saying these standard clauses will help SMEs negotiate “fairer and balanced data sharing contracts with companies enjoying a significantly stronger bargaining position.” Some major competition complaints lodged against tech giants in the EU have concerned their access to third-party data, such as the investigation into Amazon’s use of merchants’ data.


Mind of its own: Will “general AI” be like an alien invasion?

Yes, we will have created a rival and yet we may not recognize the dangers right away. In fact, we humans will most likely look upon our super-intelligent creation with overwhelming pride — one of the greatest milestones in recorded history. Some will compare it to attaining godlike powers of being able to create thinking and feeling creatures from scratch. But soon it will dawn on us that these new arrivals have minds of their own. They will surely use their superior intelligence to pursue their own goals and aspirations, driven by their own needs and wants. It is unlikely they will be evil or sadistic, but their actions will certainly be guided by their own values, morals, and sensibilities, which will be nothing like ours. Many people falsely assume we will solve this problem by building AI systems in our own image, designing technologies that think and feel and behave just like we do. This is unlikely to be the case. Artificial minds will not be created by writing software with carefully crafted rules that make them behave like us. 


5 ITSM hurdles and how to overcome them

Unclear communication makes it far more difficult to explain the value of ITSM to the business, to properly organize ITSM efforts, to set expectations for its deployment and to secure proper funding for it. Hjortkjær suggests using the CMDB to map IT components to business applications, assign ownership of those applications to both IT and business sponsors, and ask those sponsors to explain the role of each application to the business, as well as how best to use it and eventually when to replace it. Thomas Smith, director of telecommunications and IT support at funeral goods and services provider Service Corp. International, recommends being candid about schedules. “One of the biggest mistakes we made in the past, and still make, is to say ‘We’re going to get it done in three months.’ Four months later, everyone is still hoping for three months,” he says. Understand any deficiencies in your ITSM tool or services, he recommends, “and tell the business process owners ‘We have a plan to address it.’” Calvo says the terms of SLAs, such as those it created using BMC’s HelixITSM platform, can help set expectations and reduce frustration from users who “think everything should be solved ASAP.”


Data Mapping Best Practices

Many applications share the same pattern of naming common fields on the frontend, but under the hood these same fields can have quite different labels. Consider the field “Customers”: in the source code of your company’s CRM it might still carry the label “customers”, but your ERP system calls it “clients”, your finance tool calls it “customer” and the tool your organization uses for customer messaging labels it “users” altogether. This is probably one of the most common examples of this label conundrum in data mapping. To add to the complexity, what if a two-field data output from one system is expected as a one-field data input in another, or vice versa? This is what commonly happens with First Name / Last Name; a certain customer “Allan” “McGregor” from your eCommerce system will need to become “Allan McGregor” in your ERP. Or my favorite example: the potential customer email address submitted through your company’s website will need to become “first-name: Steven”, “last-name: Davis” and “company: Rangers” in your customer relationship management tool.
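As a rough illustration, here is a minimal Python sketch of such a field mapping; the field names, the FIELD_MAP table and the crm_to_erp helper are all hypothetical, not drawn from any particular CRM or ERP.

```python
# Hypothetical label mapping: CRM field name -> ERP field name.
FIELD_MAP = {"customers": "clients"}

def crm_to_erp(record: dict) -> dict:
    """Rename fields and merge the two-field name into the single field the ERP expects."""
    erp = {FIELD_MAP.get(key, key): value for key, value in record.items()}
    first = erp.pop("first-name", "")
    last = erp.pop("last-name", "")
    if first or last:
        erp["full_name"] = f"{first} {last}".strip()
    return erp

record = {"customers": "Rangers", "first-name": "Allan", "last-name": "McGregor"}
print(crm_to_erp(record))
# -> {'clients': 'Rangers', 'full_name': 'Allan McGregor'}
```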
 

How to perform Named Entity Recognition (NER) using a transformer?

Named entities can belong to different classes: Virat Kohli, for example, is the name of a person, while Lenovo is the name of a company. The process of recognizing such entities along with their class is what we call Named Entity Recognition. Traditional approaches to NER mostly rely on libraries such as spaCy and NLTK. NER has a variety of applications in natural language processing, for example summarizing information from documents, search engine optimization, content recommendation, and identifying entities in biomedical texts and processes. In this article, we aim to make the implementation of NER easy, and using transformers like BERT we can do this. The implementation will be performed using BERT, so we need to know what BERT is, which we explain in the next section. In one of our previous articles, we gave a detailed introduction to BERT. BERT stands for Bidirectional Encoder Representations from Transformers and is a well-known, pre-trained transformer model in the field of NLP.
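A minimal sketch of transformer-based NER using the Hugging Face pipeline API is shown below; “dslim/bert-base-NER” is one publicly available BERT checkpoint fine-tuned for NER, and any token-classification model could be substituted.

```python
from transformers import pipeline

# Load a BERT model fine-tuned for token classification (NER).
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Virat Kohli signed a new sponsorship deal with Lenovo in Mumbai."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
# Expected groups: PER for Virat Kohli, ORG for Lenovo, LOC for Mumbai.
```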


Using artificial intelligence to find anomalies hiding in massive datasets

To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample. They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains. “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says. This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate.
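To make that factorization concrete, here is a toy sketch that scores a three-sensor reading by summing conditional log-probabilities along an assumed dependency graph (s1 -> s2, s1 -> s3); simple Gaussian densities stand in for the normalizing-flow densities the researchers actually learn, so this illustrates the idea rather than their method.

```python
from scipy.stats import norm

def log_joint(s1: float, s2: float, s3: float) -> float:
    # Chain-rule factorization implied by the assumed graph:
    # p(s1, s2, s3) = p(s1) * p(s2 | s1) * p(s3 | s1)
    lp = norm.logpdf(s1, loc=0.0, scale=1.0)         # p(s1)
    lp += norm.logpdf(s2, loc=0.8 * s1, scale=0.5)   # p(s2 | s1)
    lp += norm.logpdf(s3, loc=-0.3 * s1, scale=0.7)  # p(s3 | s1)
    return float(lp)

print(log_joint(0.1, 0.12, -0.05))  # consistent reading: relatively high log-probability
print(log_joint(0.1, 3.00, 2.50))   # reading that contradicts the dependencies: far lower, flagged as anomalous
```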



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - February 25, 2022

Once you know your team’s current skillsets, you are ready to identify the gaps between the skills you have and the skills you need. This requires peering into your crystal ball to anticipate the skills you will need to be future-ready. You can do this by benchmarking your team’s skills against the skills hired by your most innovative peers, gathering data on the skills that are growing fastest in your industry, and assessing the capabilities you need to meet your business goals. ... The last step is to determine whether it is more efficient to buy or build the skills your team needs by comparing the effort each of these will take. To assess the effort to buy a skill, consider how prevalent it is in the market and whether it is likely to come with a salary premium. Is it a common skill that many workers have, or is it still rare? Will requesting that skill drive up the salary you must pay? The lower a skill’s supply and the higher its salary premium, the greater your training ROI is likely to be if you can build that skill internally. Even if the effort to build a skill is prohibitive, you may still need to buy it, even if it is hard or expensive to find. 


Cybersecurity burnout is real. And it's going to be a problem for all of us

Burnout threatens cybersecurity in multiple ways. First, on the employee side. "Human error is one of the biggest causes of data breaches in organisations, and the risk of causing a data breach or falling for a phishing attack is only heightened when employees are stressed and burned out," says Josh Yavor, chief information security officer (CISO) at enterprise security solutions provider Tessian. A study conducted by Tessian and Stanford University in 2020 found that 88% of data breach incidents were caused by human error. Nearly half (47%) cited distraction as the top reason for falling for a phishing scam, while 44% blamed tiredness or stress. "Why? Because when people are stressed or burned out, their cognitive load is overwhelmed and this makes spotting the signs of a phishing attack so much more difficult," Yavor tells ZDNet. Threat actors are wise to this fact, too: "Not only are they making spear-phishing campaigns more sophisticated, but they are targeting recipients during the afternoon slump, when people are most likely to be tired or distracted. Our data showed that most phishing attacks are sent between 2pm and 6pm."


Master Data Management in Data Mesh

The importance of master data management is obvious: users can only make the correct decisions if the data they use is consistent and correct. MDM ensures consistency and quality at the cross-domain level. Organizations need to find a balance: introducing too many areas of master data or reference values will demand too much cross-domain alignment, while having no enterprise data at all makes it impossible to compare any results. A practical way to begin implementing MDM in your organization is to start with the simplest form of master data management: a repository. With a repository you can quickly deliver value by learning what data needs to be aligned or is of bad quality, without adjusting any of your domain systems. A next step is to set a clearer scope. Don’t fall into the trap of enterprise data unification by selecting all data. Start with the subjects that add the most value, such as customers, contracts, organizational units, or products, and only select the most important fields to master. The number of attributes should be in the tens, not the hundreds.


Why Mutability Is Essential for Real-Time Data Analytics

Data warehouses popularized immutability because it eased scalability, especially in a distributed system. Analytical queries could be accelerated by caching heavily-accessed read-only data in RAM or SSDs. If the cached data was mutable and potentially changing, it would have to be continuously checked against the original source to avoid becoming stale or erroneous. This would have added to the operational complexity of the data warehouse; immutable data, on the other hand, created no such headaches. Immutability also reduces the risk of accidental data deletion, a significant benefit in certain use cases. Take health care and patient health records. Something like a new medical prescription would be added rather than written over existing or expired prescriptions so that you always have a complete medical history. More recently, companies tried to pair stream publishing systems such as Kafka and Kinesis with immutable data warehouses for analytics. The event systems captured IoT and web events and stored them as log files.
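As a trivial sketch of that append-only pattern (the record layout here is hypothetical), each new prescription is added as a fresh entry rather than overwriting an earlier one:

```python
from datetime import date

def add_prescription(history, patient_id, drug, issued=None):
    """Append a new prescription; earlier entries are never overwritten or deleted."""
    history.append({
        "patient_id": patient_id,
        "drug": drug,
        "issued": issued or date.today().isoformat(),
    })
    return history

history = []
add_prescription(history, "P-001", "amoxicillin 500mg", issued="2021-11-02")
add_prescription(history, "P-001", "amoxicillin 250mg", issued="2022-02-20")
print(history)  # both entries survive, so the complete medication history is preserved
```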


Mark Zuckerberg wants to build a voice assistant that blows Alexa and Siri away

While Meta’s Big Tech competitors — Amazon, Apple, and Google — already have popular voice assistant products, either on mobile or as standalone hardware like Alexa, Meta doesn’t. “When we have glasses on our faces, that will be the first time an AI system will be able to really see the world from our perspective — see what we see, hear what we hear, and more,” said Zuckerberg. “So the ability and expectation we have for AI systems will be much higher.” To meet those expectations, the company says it’s been developing a project called CAIRaoke, a self-learning AI neural model (that’s a statistical model based on biological networks in the human brain) to power its voice assistant. This model uses “self-supervised learning,” meaning that rather than being trained on large datasets the way many other AI models are, the AI can essentially teach itself. “Before, all the blocks were built separately, and then you sort of glued them together,” Meta’s managing director of AI research, Joëlle Pineau, told Recode. “As we move to self-supervised learning, we have the ability to learn the whole conversation.”


Samsung Shattered Encryption on 100M Phones

Paul Ducklin, principal research scientist for Sophos, called out Samsung coders for committing “a cardinal cryptographic sin.” Namely, “They used a proper encryption algorithm (in this case, AES-GCM) improperly,” he explained to Threatpost via email on Thursday. “Loosely speaking, AES-GCM needs a fresh burst of securely chosen random data for every new encryption operation – that’s not just a ‘nice-to-have’ feature, it’s an algorithmic requirement. In internet standards language, it’s a MUST, not a SHOULD,” Ducklin emphasized. “That fresh-every-time randomness (12 bytes’ worth at least for the AES-GCM cipher mode) is known as a ‘nonce,’ short for Number Used Once – a jargon word that cryptographic programmers should treat as a *command*, not merely as a noun.” Unfortunately, Samsung’s supposedly secure cryptographic code didn’t enforce that requirement, Ducklin explained. “Indeed, it allowed an app running outside the secure encryption hardware component not only to influence the nonces used inside it, but even to choose those nonces exactly, deliberately and malevolently, repeating them as often as the app’s creator wanted.”
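For contrast, here is a minimal sketch of AES-GCM used as intended with Python's cryptography package, drawing a fresh, securely random 12-byte nonce for every encryption and storing it alongside the ciphertext; the seal/open_ helper names are illustrative, not part of any real API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def seal(plaintext: bytes, aad: bytes = b"") -> bytes:
    nonce = os.urandom(12)                    # fresh, securely random nonce for EVERY operation
    return nonce + aead.encrypt(nonce, plaintext, aad)

def open_(blob: bytes, aad: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]  # the nonce need not be secret, only never reused
    return aead.decrypt(nonce, ciphertext, aad)

print(open_(seal(b"wrapped key material")))
```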


Big banks will blaze the enterprise GPT-3 AI trail

The use cases for GPT-3 in financial services are broad and already encompassed in specific machine-learning packages. For instance, sentiment analysis (using social media and articles to capture the temperature of the market), entity recognition (classification of documents), and translation are all widely available and used. Where GPT-3 will likely come into play for banks is in language generation – the ability to handle claims and fill information into forms, for example. This might be a small, consumer-focused start, but with enough training data, GPT-3 could start taking an active role in risk management and investment decisions. Getting a handle on the current ROI for this tech in banking is difficult. These ML elements exist, but as data volumes grow, the need for massive industry- and even bank-specific trained models is clearer. One big problem for financial institutions able to access the model (OpenAI is less closed these days but GPT-3 is limited in terms of pre-training, downstream task fine-tuning, plus no industry-specific corpus) is finding the people to make it all work.


A New Dawn: Blockchain Tech Is Rising On The Auto Horizon

Given an accident between two vehicles, multiple pieces of information can be easily shared and/or recorded for financial transactions including insurance coverage, the costs of subsequent repairs or medical bills, the percent culpability of both parties, etc. Over time, this creates a digital history of a vehicle, which can be used to avoid deceit. “Fraud is a big expense for insurance carriers,” explains Shivani Govil, Chief Product Officer at CCC Intelligent Solutions, “In the U.S. alone, fraud costs the insurance industry over $40 billion annually. ... In the coming years, manufacturers, governments, repair shops and suppliers will need to track cybersecurity certifications based upon the software versions of every part, especially since some countries will soon require proof of cybersecurity management systems and software update capability. The vast Bill of Materials for a given vehicle might include hundreds of parts with different software versions, any of which may have been replaced last week after an accident. Does the replacement part have updated code?


Touchless tech: Why gesture-based computing could be the next big thing

This move towards new forms of interaction is a trend that resonates with Mia Sorgi, director of digital product and experience at food and drink giant PepsiCo Europe, whose company ran a gesture-based project recently that allowed customers in a KFC restaurant to be served by moving their hands, with no contact required. "I'm really proud of the work we did here," she says. "I believe that gesture is a very important emerging interface option. I think it is something that we will be doing more of in the future. I think it's really valuable to get an understanding of how to win in that space, and how to create something successful that people can use." While PepsiCo's gesture-based project received fresh impetus during the pandemic, Sorgi explains to ZDNet that the company has been experimenting with touchless technology for the past three years. Those initial investigations into gesture were scaled up and explored in a business environment last year.


What the retail sector can learn from supply chain disruption

Technology has undoubtedly become influential in all aspects of our daily lives and has hit every part of the retail ecosystem. The impacts of the pandemic created a boom of online shoppers, who aren’t going away any time soon. Consumers are now more inclined to search for and buy products online, with unlimited options as the tantalising grip – retailers, though, must endeavour to continuously adapt to keep up with ever-evolving customer demands. This goes for physical stores too, as customers who become more tech savvy increasingly expect brick-and-mortar stores to keep up with exciting new digital innovations. An additional part of this retail ecosystem that can be revolutionised by technology is the industry’s operations, which can and should be stabilised and made more efficient. Whilst no retailer can predict what will happen in the future, they can invest in technology that helps them cope with erratic seasonal or supply rises and falls and prepare for whatever lies ahead. Investing in software solutions that can help to stabilise operations and prepare for unknown terrain is a good place to start.



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer

Daily Tech Digest - February 24, 2022

Yann LeCun: AI Doesn​’t Need Our Supervision

Self-supervised learning (SSL) allows us to train a system to learn good representation of the inputs in a task-independent way. Because SSL training uses unlabeled data, we can use very large training sets, and get the system to learn more robust and more complete representations of the inputs. It then takes a small amount of labeled data to get good performance on any supervised task. This greatly reduces the necessary amount of labeled data [endemic to] pure supervised learning, and makes the system more robust, and more able to handle inputs that are different from the labeled training samples. It also sometimes reduces the sensitivity of the system to bias in the data—an improvement about which we’ll share more of our insights in research to be made public in the coming weeks. What’s happening now in practical AI systems is that we are moving toward larger architectures that are pretrained with SSL on large amounts of unlabeled data. These can be used for a wide variety of tasks. For example, Meta AI now has language-translation systems that can handle a couple hundred languages.


Leading from the top to create a resilient organisation

In the rush to keep operations going, many businesses made quick decisions and often, adopted the wrong services for their organisation. Our own research found that over half (53%) of UK IT decision makers believe they made unnecessary tech investments during the Covid-19 pandemic, and by speeding up or ignoring their original strategy, have hindered their long term resilience. One thing almost all businesses have recognised throughout the pandemic, is that their people are the most critical and limiting factor to their business. Employee time is valuable and by not having technology that supports them in their role, productivity will drop, and employees may become an internal threat in terms of cyber security. If businesses acknowledge that hybrid is the new normal, and their people should be the priority, they can go some way to understand how IT moves from an expense to adding value. Although most of this has stemmed from a pandemic no one could have predicted, businesses and their leaders must now make sure they haven’t created the perfect storm of a distributed, disconnected workforce that is at risk of service outages.


Details of NSA-linked Bvp47 Linux backdoor shared by researchers

The attacks employing the Bvp47 backdoor are dubbed 'Operation Telescreen' by Pangu Lab. A telescreen was a device envisioned by George Orwell in his novel 1984 that enabled the state to remotely monitor people in order to control them. According to Pangu Lab researchers, the malicious code of Bvp47 was developed to give operators long-term control over compromised machines. 'The tool is well-designed, powerful, and widely adapted. Its network attack capability equipped by 0-day vulnerabilities was unstoppable, and its data acquisition under covert control was with little effort,' they said. Complex code, Linux multi-version platform adaptation, segment encryption and decryption and extensive rootkit anti-tracking mechanisms are all part of Bvp47's implementation. It also features an advanced BPF engine, which is employed in advanced covert channels, as well as a communication encryption and decryption procedure. The researchers say the attribution to the Equation Group is based on the fact that the sample code shows similarities with exploits contained in the encrypted archive file 'eqgrp-auction-file.tar.xz.gpg' which was posted by the Shadow Brokers after the failed auction in 2016.


Cloud computing vs fog computing vs edge computing: The future of IoT

Cloud computing is the process of delivering on-demand services or resources over the internet that allows users to gain seamless access to resources from remote locations without expending any additional time, cost or workforce. Switching from building in-house data centres to cloud computing helps companies reduce their investment and maintenance costs considerably. ... Fog computing is a type of computing architecture that utilises a series of nodes to receive and process data from IoT devices in real-time. It is a decentralised infrastructure that provides access to the entry points of various service providers to compute, store, transmit and process data over a networking area. This method significantly improves the efficiency of the process as the time utilised in the transmission and processing of data is reduced. In addition, the implementation of protocol gateways ensures that the data is secure. ... Cloud and fog computing prove to be unreliable when dealing with applications that require instantaneous responses with tightly managed latency. Edge computing deals with processing persistent data situated near the data source in a region considered the ‘edge’ of the apparatus.


Data Unions Offer a New Model for User Data

One of the promises of a decentralized Web3 is the notion that as users we can all own our data. This is in contrast to Web 2.0, where the prevailing view is that we the users and our data are the product being exploited for financial gain by large centralized organizations. A data union is a scalable way to collect real-time data from individuals and package that data for sale, in a way that is mutually agreeable to both the data source and the packaging application. Much like workers joining a union in real life to rally around a common set of goals, data unions allow individuals to join these unions to aggregate data in a controlled way, complete with the ability to vote on how and where the data is used, through DAO (decentralized autonomous organization) governance. For users, one challenge to the idea of controlling your data is finding an interested buyer. Few data consumers want to go through the hassle of acquiring data from one individual at a time. Data unions solve this by aggregating data from a set of users who opt-in. 


How to protect your Kubernetes infrastructure from the Argo CD vulnerability

In terms of the impact of this vulnerability, Apiiro has determined the following (so far). Note that the following information was from Apiiro’s website at the time of the announcement and may be subject to change. Please refer to Apiiro’s website for the latest information. Here’s what we know about the vulnerability and what it could enable an attacker to do: the attacker can read and exfiltrate secrets, tokens, and other sensitive information residing on other applications; and the attacker can “move laterally” from their application to another application’s data. The risk was rated high severity because the malicious Helm chart could potentially expose sensitive information stored in a Git repository and also “roam” through applications, allowing attackers to read secrets, tokens, and sensitive data that reside within the applications. The team behind Argo CD quickly provided a patch, which affected organizations should apply as soon as possible as the vulnerability affects all versions of the tool. The patch is available via Argo CD’s GitHub repository.


Understanding your automation journey

In order to achieve shorter-term automation goals, businesses need to evaluate their existing automation needs and ask a few key questions. Are they seeking to automate mundane tasks to increase personal productivity, such as processing emails, setting up notifications or organising files? Personal productivity automation is employee-driven and used to tackle multiple tasks for productivity gains at the individual level. Are they seeking to streamline business processes, such as processing a high volume of invoices or moving data from one system to another? Business process automation (BPA) is also employee-driven but it streamlines business processes to deliver efficiencies and productivity gains across users and departments. Automation might also be an ongoing project, often referred to as an automation Centre of Excellence (CoE), which focuses on intricate, enterprise-wide automation and orchestration. CoE-driven automation is fairly complicated and has a significant influence on automating connected processes.


Going Digital in the Middle of a Pandemic

Independent work-streams allowed them to work in parallel. Does that mean we did not have any dependencies? Not really. We had a stand-up, which we called the Scrum of Scrums, conducted daily with participation from each development team and focused on dependencies and impediment resolution during the iteration. Given the nature of the program and the diverse set of stakeholders, we decided to conduct consolidated program iteration planning and showcase events. Development teams would conduct their planning meetings individually and then join this program meeting to share a summary of the key features taken up in the iteration, along with the sprint goal. Lastly, to give stakeholders a view of how we were progressing against defined release milestones, we tracked progress against iteration goals vis-à-vis release objectives. A release was defined as a set of features required to onboard users from a specific geography. We provided a one-page weekly/fortnightly program summary to senior CIO leadership and program stakeholders, with data from the ALM tool, along with any blockers & issues that needed executive leadership support.


Cyber Insurance's Battle With Cyberwarfare: An IW Special Report

While the clauses were issued in the company’s marketing association bulletin and allowed individual underwriters flexibility in applying them to individual policies, they were widely interpreted as signifying a shift toward non-coverage. All of Lloyd’s cyber policies are expected to include some variation of these clauses going forward. Lloyd's of London's definition of cyberwar broadly includes “cyber operations between states which are not excluded by the definition of war, cyber war or cyber operations which have a major detrimental impact on a state.” Formal attribution is not necessary for exclusion, an important caveat that would allow for broad latitude in making determinations of whether a given event is actually cyberwar or not. “I think you're going to see a lot more of that, unless there is legislation that comes out that more specifically defines cyberwar. I don't think we're really seeing it at this point,” notes Adrian Mak, CEO of AdvisorSmith. The language in the individual contracts is “what is driving the coverage at this point. And also, interpretation of that [language].”


Digital transformation: Do's and don'ts for IT leaders to succeed

Fear is a natural reaction when we enter uncharted territory. Moreover, the digital transformation journey also requires skill, patience, and a huge financial investment, which adds an extra level of anxiety. Many leaders are uncertain about investing resources into an initiative that they are unsure of, even if there are plenty of stats available to back it up. If you are feeling uncomfortable, try to focus your energy toward embracing your digital transformation initiative and giving it everything it needs to succeed. Remind yourself that in time, you will witness the positive results of your efforts and even scale your business’s revenue. Every enterprise and organization must eventually make digitalization a strategic cornerstone to remain competitive and better serve their constituents. If convenience, scalability, and security are among your business priorities, implementing a thoughtful digital transformation initiative is essential.



Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson

Daily Tech Digest - February 23, 2022

The Metaverse Is Coming; We May Already Be in It

The metaverse has moved beyond science fiction to become a “technosocial imaginary,” a collective vision of the future held by those with the power to turn that vision into reality. Facebook recently changed its name to Meta and committed $10 billion to build out metaverse-related technology. Microsoft just announced that it was spending a record-breaking $69 billion to buy Activision Blizzard, the makers of some of the most popular massively multiplayer online games in the world, including World of Warcraft. This current vision of the metaverse goes well beyond the simple VR of my ping-pong game to eventually include augmented reality (or AR, where smart glasses project objects onto the physical world), portable digital goods and currency in the form of nonfungible tokens (NFTs) and cryptocurrency, realistic AI characters that can pass the Turing test, and brain-computer interface (BCI) technology. BCIs will eventually allow us to not only control our avatars via brain waves, but eventually, to beam signals from the metaverse directly into our brains, further muddying the waters of what is real and what is virtual.


Using Machine Learning for Fast Test Feedback to Developers and Test Suite Optimization

The necessary step of integrating source control and test result data opens up an “incidental” use case: correctly routing defects in multi-team environments. Sometimes there are defects or bugs where it is not clear which team they should be assigned to. Typically, if you have more than two teams, it can be cumbersome to find the right team to take care of a fix. This can lead to a kind of defect ping-pong between teams, because no one feels responsible until the defect is finally assigned to the correct team. Since the Healthineers data also contains change management logs, there is information about defects and their fixes, e.g. which team performed a fix or which files were changed. In many cases, there are test cases connected to a defect - either existing ones, when a problem is found in a test run before release, or new tests added because a test gap was identified. This makes it possible to tackle the “defect hot potato” problem. Defects can be related to test cases in several ways, for example if a test case is mentioned in the defect’s description or if the defect management system allows explicit links between defects and test cases.
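As a rough illustration of how such routing could work, here is a minimal sketch - not the Healthineers implementation; all test IDs, file paths, and team names are assumed for illustration. It links a defect to test cases mentioned in its description, maps those tests to the source files they exercise, and suggests the team that owns those files.

```python
from collections import Counter

# Assumed, simplified data: in practice this would come from the test
# management system and the source-control / change-management history.
test_to_files = {
    "TC-101": ["billing/invoice.py", "billing/tax.py"],
    "TC-205": ["auth/login.py"],
}
file_to_team = {
    "billing/invoice.py": "Team Billing",
    "billing/tax.py": "Team Billing",
    "auth/login.py": "Team Identity",
}

def suggest_team(defect_description: str):
    """Suggest the team most likely responsible for a defect.

    Relates the defect to test cases mentioned in its description,
    then votes for the team owning the files those tests touch.
    """
    votes = Counter()
    for test_id, files in test_to_files.items():
        if test_id in defect_description:  # explicit link via the description text
            for f in files:
                team = file_to_team.get(f)
                if team:
                    votes[team] += 1
    return votes.most_common(1)[0][0] if votes else None

print(suggest_team("Rounding error reproduced by TC-101 on the release branch"))
# -> Team Billing
```

A production version would also use explicit defect-to-test links from the defect management system, as described above, rather than relying on text matching alone.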

Curious about quantum computing

As technologists, it’s our responsibility to also keep an eye on these advancements—to learn where they’re headed, to steer our business partners toward the right use cases for them, and even to help shape what they become. Quantum computing is one such technology. I find the very idea of quantum computing fascinating. It takes computer science—the hardware and software that we created in the computer industry—and blends in the fundamentals of nature, physics, and other observed sciences. I believe quantum computing is an area that will fundamentally change the world around us… eventually. But I also find that there’s a lot of hype and misinformation around quantum computing, with only a handful of experts truly in a position to discuss its current state (did you catch what I did there?). I wanted to cut through the hype and go straight to one of these experts myself to get a better understanding of where quantum computing is today and where it’s headed in the future. Introducing Dr. John Preskill. Dr. John Preskill is a pioneer in the field of quantum computing. He is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, where he is also the Director of the Institute for Quantum Information and Matter.


Is Serverless Just a Stopover for Event-Driven Architecture?

Serverless does exhibit many desirable traits. It is easy to scale up and scale down. It is triggered by events that are pushed rather than polled for. Functions only consume the resources their job needs, then exit and free up resources for other workloads. Developers benefit from the abstraction of infrastructure and can deploy code easily via their CI/CD pipelines without worrying about how to provision resources. However, the point that Aniszczyk alludes to is that serverless isn’t designed for many situations, including long-running applications. Serverless functions can actually be more expensive to the end user than running a dedicated application in containers, a VM, or on bare metal. As an opinionated solution, it forces developers into the model facilitated by the vendor. In addition, serverless doesn’t have an easy way to handle state. Finally, though serverless deployments are largely deployed in the cloud, they aren’t easily deployed across cloud providers. The tooling and mechanisms for managing serverless are very much specific to each cloud, though with the donation of Knative to the CNCF, there could eventually be a serverless platform developed and deployed with the support of the industry, much as Kubernetes has been.
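To make the "push-triggered, stateless, pay-per-invocation" traits concrete, here is a minimal sketch of a function written in the handler style many FaaS platforms use. The event shape and handler signature are assumptions for illustration and are not tied to any specific provider.

```python
import json

def handler(event, context=None):
    """Runs only when an event is pushed to it; holds no state between calls.

    `event` is assumed to be a dict describing, e.g., an uploaded file.
    Any state that must survive the invocation has to live in an external
    store, which is one of the limitations discussed above.
    """
    record = event.get("record", {})
    result = {"object": record.get("key"), "size_bytes": record.get("size", 0)}
    return {"statusCode": 200, "body": json.dumps(result)}

# Local usage example: simulate the platform pushing an event to the function.
if __name__ == "__main__":
    print(handler({"record": {"key": "invoices/2022-02.pdf", "size": 52417}}))
```

The same shape also hints at the lock-in concern above: the event format, trigger wiring, and deployment tooling around a handler like this differ from one cloud provider to the next.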


Why Big Tech is losing talent to crypto, web3 projects

Another example of a high-profile person leaving big tech for crypto is John deVadoss, former Managing Director (MD) at Microsoft, where he spent about 16 years of his career in a variety of roles, for example General Manager (GM) overseeing the developer platform Microsoft.NET, and most recently building Microsoft Digital from zero to half a billion dollars of business worldwide. “I built and led Architecture strategy for .NET at Microsoft; I built the first enterprise frameworks and tools for Visual Studio .Net; I lead Microsoft’s first application platform product line and strategy, and I also worked on the Azure developer experience, long before it was called Microsoft Azure,” says deVadoss in an interview with CryptoSlate. After all these years at Microsoft, deVadoss went for Neo – the “Chinese Ethereum” blockchain with high ambitions indeed. ... “I have worked on developer platforms and tools for over 25 years, and it was a natural move to build the blockchain industry’s best developer tools and experience for Neo N3, the first polyglot blockchain platform in the industry and the most developer-friendly,” deVadoss says.


Blockchain: The game-changing technology that’s about to disrupt almost every industry

Blockchain technology can offer effective solutions to banks and non-banking financial institutions (NBFCs) to improve their payment clearing and credit information systems. It can also enhance the security of online banking transactions. With blockchain, banks could combine their payment protocols with smart contracts, and this would allow them to establish multiple data points on each transaction. These data points would further enable banks to monitor their loans, track transactions, and easily manage their invoicing and financing-related activities. In a blockchain-based banking system, each user can be provided with a private key for every transaction on the ledger; this key works like a unique digital signature. So at any point, if a banking record is altered, the digital signature is rendered invalid, and the whole banking network is notified of the anomaly. ... Cryptocurrencies provide an alternative to traditional banking for people who remain unbanked, for various reasons. Their use has also been suggested as a way to decouple currencies from traditional monetary systems. For example, the hyperinflation that began in Venezuela in 2016 resulted in a steep devaluation of the nation’s currency.
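The tamper-evidence idea described above can be illustrated with a toy sketch. This is not a real banking ledger and it uses hash chaining rather than the per-user digital signatures the article describes, but the effect is the same: editing any earlier record invalidates the chain, so the anomaly can be detected and flagged to the network.

```python
import hashlib
import json

def entry_hash(record: dict, prev_hash: str) -> str:
    """Hash a ledger record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_ledger(records):
    """Chain records together; each entry stores the hash linking it to its predecessor."""
    ledger, prev = [], "genesis"
    for rec in records:
        prev = entry_hash(rec, prev)
        ledger.append({"record": rec, "hash": prev})
    return ledger

def verify(ledger) -> bool:
    """Recompute the chain; any altered record makes verification fail."""
    prev = "genesis"
    for entry in ledger:
        prev = entry_hash(entry["record"], prev)
        if prev != entry["hash"]:
            return False
    return True

ledger = build_ledger([
    {"from": "alice", "to": "bob", "amount": 100},
    {"from": "bob", "to": "carol", "amount": 40},
])
print(verify(ledger))                      # True
ledger[0]["record"]["amount"] = 1_000_000  # tamper with an earlier payment
print(verify(ledger))                      # False - the anomaly can be flagged
```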


Behind the stalkerware network spilling the private phone data of hundreds of thousands

TechCrunch first discovered the vulnerability as part of a wider exploration of consumer-grade spyware. The vulnerability is simple, which is what makes it so damaging, allowing near-unfettered remote access to a device’s data. But efforts to privately disclose the security flaw, to prevent it from being misused by nefarious actors, have been met with silence both from those behind the operation and from Codero, the web company that hosts the spyware operation’s back-end server infrastructure. The nature of spyware means those targeted likely have no idea that their phone is compromised. With no expectation that the vulnerability will be fixed any time soon, TechCrunch is now revealing more about the spyware apps and the operation so that owners of compromised devices can uninstall the spyware themselves, if it’s safe to do so. Given the complexities in notifying victims, CERT/CC, the vulnerability disclosure center at Carnegie Mellon University’s Software Engineering Institute, has also published a note about the spyware.



Matter, explained: What is the next-gen smart home standard?

Matter uses a wireless technology based on Internet Protocol (IP), the same protocol Wi-Fi routers use to assign an address to your connected devices. Because it natively integrates an IP-based protocol for smart home devices, there are no awkward handoffs or extra wireless technologies to deal with. It paves the way to a future where all Matter-certified devices work alongside each other in synchronous harmony. Bringing our smart home devices together like this not only makes setup a breeze, but is essential when designing a single universal smart home environment that just works. The ultimate goal here is to create a "set it and forget it" situation where these devices essentially fade into the background rather than sit in the foreground. Thankfully, Matter sounds like the thing we need to finally bridge that gap and fix the smart home situation once and for all. We have some of the biggest tech giants working together to make Matter a unified protocol in the smart homes of the future.


Mitigating Risks in Cloud Native Applications

As the shift to the work-from-anywhere model becomes mainstream and cloud applications continue to surge, new dynamics are emerging: “security and observability is converging,” said Tipirneni. While DevOps and IT security have traditionally been treated as separate disciplines, their roles and responsibilities are increasingly moving toward the DevSecOps trend. “Solving the security problem and observability problem is your ability to instrument everything that is happening in the system at a very fine-grained level — from gathering the data and really making sense of the data,” said Tipirneni. “Developers try to work around security controls that are complex but bringing those two together puts the power in the developers’ hands,” he added. Information security and development teams have traditionally managed Tigera’s solutions like Calico and Envoy, but for cloud-first companies that do not have legacy applications, “DevOps, Cloud Ops engineers are pretty much responsible end to end,” said Tipirneni. From deploying applications to troubleshooting and managing compliance and security, “the challenge they have is that there’s just way too much on their plate to do,” Tipirneni added.


NFT use cases for businesses

NFTs have also shown that they can reveal customers’ interests to organisations without marketing teams needing to scour Internet usage data. In time, NFTs could be utilised to learn more about what customers need before a product is purchased. Conor Svensson, founder and CEO of Web3 Labs, said: “I believe the true inflection point of adoption will be when the majority of smartphone users hold them. Whilst the technology is there to do this currently, only a minority of people keep NFTs on them. This will be key for true mass adoption. “An NFT can represent any real-world or virtual good, as it stands the greatest value outside of financial for them is the communities that are forming around holders of them. This is a marketeer’s dream, as prior to NFTs it wasn’t easy to learn a person was interested in a product or brand unless they purchased it or engaged with it by signing up for email updates, liking Twitter posts, etc. “The NFTs a person holds in a wallet can be viewed as an expression of their interests, and the fact that this is public information is a powerful tool for targeting individuals and communities.”



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg