Daily Tech Digest - March 06, 2022

Decentralized identity using blockchain

Let’s assume an online shopping scenario where the required data transits from the wallet associated with the decentralized identity. The wallet in this scenario contains the user’s verified identity, address, and financial data. Users log in to the website by submitting the required information from the identity wallet and are authenticated without sharing the actual data. The same applies to the checkout process: a user can place an order with the address and payment source already verified in their identity wallet. Consequently, a user can enjoy a smooth and secure online shopping experience without sharing an address or financial data with the ecommerce website’s owner. ... Blockchain technology uses a consensus approach to prove data authenticity through various nodes and acts as the source of trust for verifying user identity. Along with the data, each block also contains a hash that changes if someone tampers with the data. These blocks form a highly encrypted list of transactions or entries shared across all the nodes distributed throughout the network.
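The hash mechanism described here can be sketched with a toy chain in which each block records the hash of its predecessor, so tampering with any entry breaks every later link. This is an illustration only (real blockchains add consensus, signatures, and distribution across nodes):

```python
# Toy illustration of blockchain tamper-evidence: each block stores the hash
# of the previous block, so altering any entry invalidates every later hash.
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (including the previous hash) deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(entries):
    chain, prev = [], "0" * 64  # the genesis block points at an all-zero hash
    for entry in entries:
        block = {"data": entry, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["order #1", "order #2", "order #3"])
assert verify_chain(chain)
chain[1]["data"] = "order #2 (tampered)"  # any tampering breaks the links
assert not verify_chain(chain)
```

Because each block's hash covers the previous hash, a tampered middle block would force an attacker to recompute every subsequent hash on every node, which consensus makes impractical.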


Breach Notification: Poor Transparency Complicates Response

Unfortunately, data breach experts continue to see increasing transparency shortfalls, both from organizations that fall victim and from regulators. In 2020, for example, 209 consumer breach notifications lacked important details, while in 2021, 607 breaches lacked such details. So says the Identity Theft Resource Center, a nonprofit organization based in San Diego, California, that provides no-cost assistance to U.S. identity theft victims to help resolve their cases (see: Data Breach Trends: Global Count of Known Victims Increases). "The lack of actionable information in breach notices prevents consumers from effectively judging the risks they face of identity misuse and taking the appropriate actions to protect themselves," ITRC says in its latest Annual Data Breach Report, looking at 2021 trends. "A decrease in timely notices posted by states, including one state that updated breach notices in December 2021 for the first time since the fall of 2020, also prevents consumers from taking action to protect themselves and organizations that assist identity crime victims from offering timely, effective advice."


Entrepreneurship for Engineers: How to Build a Community

You probably already know that understanding and being able to articulate your product’s value proposition is critical to successful sales and marketing — but your community needs to add value, too, above and beyond the value that the product/project provides. “No one wakes up in the morning and thinks ‘I’m going to go and answer questions on the internet,’” Bacon said. People need to get something out of participating in the community that they can’t get anywhere else. “People love the community aspects,” said Ketan Umare, co-founder and CEO of Union.ai, the company behind Flyte, an open source workflow automation platform for data and machine learning processes, describing his experience building a community with a value proposition above and beyond the project’s value. “We guarantee you that in the community, there is somebody to listen to your problems,” Umare said. “It creates this feeling that you are not alone.”


What SREs Can Learn From Capt. Sully: When To Follow Playbooks

What’s interesting about Sully’s story is that he didn’t do exactly what pilots (or engineers) are trained to do. He didn’t stick completely to the playbook a pilot is supposed to follow during engine failure, which stipulates that the plane should land at the nearest airport. Instead, he made the decision to crash-land in the Hudson River. The fact that Sully did this without any loss of human life turned him into a hero. In fact, the movie Sully almost villainizes the National Transportation Safety Board (NTSB) for what the film presents as an unfair investigation of Sully for not sticking to the playbook. Yet, as the podcasters noted, the difference between heroism and villainy for Sully may have boiled down to luck. They pointed out that in similar incidents in which staff deviated from playbooks, such as the Costa Concordia sinking in 2012, those involved ended up facing stiff penalties. In the Costa Concordia case, the ship’s captain was jailed, despite the fact that his decision not to stick rigidly to the playbook most likely reduced the total loss of human life.


The truth about VDI and cloud computing

Performance is the core problem. Not all home-based internet connections support high speeds and low latency. Indeed, even if you pay for the faster stuff, a few days of detailed monitoring will show that latency and speed are pretty bursty overall. VDI, depending on what you’re leveraging, does keep data and applications centrally located and thus, hopefully, secure. But both application images and data must be constantly transmitted to employees’ devices, and interactions transmitted back to the virtual servers; VDI sessions are very chatty. This is unlike applications that run locally with locally stored data, where the response is nearly instantaneous, the kind of performance most of us are used to. Latency, even if it’s not noticeable to most remote workers, can add up to productivity losses that run into many millions of dollars a year. Many of the savvier remote workers have worked around the performance issues by moving some data to local storage on their devices (such as with email), creating a potential security problem if the device is hacked or stolen.
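The claim that latency "can add up to many millions of dollars a year" is easy to sanity-check with rough arithmetic. All figures below are illustrative assumptions, not data from the article:

```python
# Back-of-the-envelope estimate of how small VDI delays accumulate across a
# remote workforce. Every input here is an assumed, hypothetical figure.
def annual_latency_cost(workers, extra_seconds_per_interaction,
                        interactions_per_day, workdays=230, hourly_cost=50.0):
    # Total seconds lost per year, converted to hours, priced at loaded cost.
    lost_hours = (workers * extra_seconds_per_interaction
                  * interactions_per_day * workdays) / 3600
    return lost_hours * hourly_cost

# Assume 10,000 remote workers, 0.5 s of extra round-trip delay per
# interaction, and 500 interactions per workday.
cost = annual_latency_cost(10_000, 0.5, 500)
print(f"${cost:,.0f} per year")
```

Under these assumptions the half-second delays alone total roughly $8 million per year, which matches the article's "many millions" characterization.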


Ukraine: How to protect yourself against cyberattacks

Experts say they are currently more concerned with institutional rather than personal cyber hacks. But attacks on individual accounts owned by private citizens, who work for institutions that handle sensitive information, are still a risk. "People who are not wary are often the weakest link and the foot in the door for cybercriminals looking to stage a larger attack on critical infrastructure," Rachel Schutte, an IT and cybersecurity manager based in Germany, told DW. This was the case for European government personnel involved in assisting refugees fleeing Ukraine. They received phishing emails — or messages aimed at collecting sensitive information — from a Ukrainian armed service member’s compromised account, she said. In response to increased instances of cyberattacks aimed at employees of high-profile organizations, Deutsche Welle has also asked employees to ramp up security on personal social media accounts. ... Cloud-based services distribute distinct functions across data centers in multiple locations, fueling a race towards interconnected networks. 


US Officials Push Collaboration, AML Controls for Crypto

According to Conklin, the Treasury Department has for a decade targeted the assets of Russian elites - dating back to the country's first invasion of Crimea in 2014. "So we do know a little bit about how this regime likes to evade sanctions and move money, and we have a significant toolkit at our disposal now to tackle that," he said. "The regime does like to layer its assets and move money. They have a long and extensive playbook to launder money, and at the center of their playbook is their web of international corporate registration and the use of foreign companies and foreign persons. They're also really adept at conversion to other assets, including gold and foreign currencies." And so, asked whether crypto will be a part of its workaround, Conklin said: "Certainly, there's going to be an element. That's part of the playbook, but it frankly isn't at the top of their list." He also referenced Treasury's sanctioning of the Russian crypto exchange Suex in September 2021 as an example of "how sanctions can work in the crypto ecosystem."


Gartner: Public sector must target disjointed IT strategy

Mickoleit recommended that public sector IT chiefs “zoom out” to enable them to look at how technology investments can be aligned with policy objectives. As an example of joining up IT with policy, he said it is impossible to provide high-quality public sector services without the concept of digital identity, which needs to link across different tech infrastructure and public sector bodies. Another aspect of the pandemic was that having “good enough” processes is not sufficient, said Mickoleit. “Just working isn’t enough. There were huge scaling issues, families and businesses in need.” He warned that such a situation is not sustainable when there is a disruption. “There is a need for efficiencies in government,” he added. This means IT leaders need to focus on reducing the number of process steps to support case work and deliver a service to a citizen, said Mickoleit. “There is an ideal opportunity to combine AI and automation for better support,” he pointed out.


How to Become a Data Governance Lead

A significant problem facing businesses implementing a Data Governance program is the realization that raw data is often not analysis-ready. The data may be badly organized, unstructured, or stored in separate databases. The data has to be cleaned and standardized before the Data Governance program can move forward. Developing a Data Governance program might require a fair amount of manual labor, but after the data has been standardized, incoming data can be sent automatically to the appropriate location, in the correct format. Data silos are a slightly different problem for Data Governance programs. Data can be stored in silos and treated as though certain teams or individuals own it — and they sometimes don’t like to share. Additionally, different departments may use entirely different systems, making standardization especially difficult. These same departments may have no real understanding of their data’s value. Data Governance provides a framework for shared access to that data, breaking down the silos.
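The standardization step described above amounts to mapping each silo's local field names and formats onto one canonical schema. A minimal sketch; the silo names, fields, and schema are hypothetical:

```python
# Sketch of silo-to-canonical standardization: records arriving from separate
# systems in different shapes are mapped into one agreed format.

CANONICAL_FIELDS = ("customer_id", "email", "country")

# Per-silo mappings from local field names to the canonical schema.
SILO_MAPPINGS = {
    "crm":     {"CustID": "customer_id", "Email": "email", "Country": "country"},
    "billing": {"customer": "customer_id", "mail": "email", "ctry": "country"},
}

def standardize(record, silo):
    mapping = SILO_MAPPINGS[silo]
    # Rename known fields; normalize string values while renaming.
    out = {mapping[k]: (v.strip().lower() if isinstance(v, str) else v)
           for k, v in record.items() if k in mapping}
    # Emit every canonical field so downstream code always sees one shape.
    return {field: out.get(field) for field in CANONICAL_FIELDS}

row = standardize({"CustID": 42, "Email": " Ada@Example.COM "}, "crm")
# row -> {"customer_id": 42, "email": "ada@example.com", "country": None}
```

Once every silo has such a mapping, incoming records can be routed automatically into the governed store in a single format, as the excerpt describes.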



Quote for the day:

"A leader's dynamic does not come from special powers. It comes from a strong belief in a purpose and a willingness to express that conviction." -- Kouzes & Posner

Daily Tech Digest - March 05, 2022

UK data watchdog urges vigilance amid heightened cyber threat

Asked about the potential for a Russia-Ukraine cyber conflict spreading to the UK, Edwards said: “We have picked up on that heightened threat environment and we think it’s really important to take the opportunity to remind businesses of the importance of security over the data that they hold. This is a different era from blacking out the windows and keeping the lights off. The threats are going to come in through your inbox.” Edwards said that outside the Ukraine conflict and the warnings it had brought of a heightened security threat, the Information Commissioner’s Office had seen a “steady and significant” increase in cyber-attacks against UK businesses over the past two years. Between July and December last year the ICO recorded 1,345 “cybersecurity incidents”, including ransomware attacks, where assailants demand payment in cryptocurrency to decrypt a target’s computers, and phishing attacks, where the victim is tricked, often via email, into downloading malware or handing over their login details. 


10 enterprise AI trends for 2022

After companies are successful at initial proofs of concept, they often build AI centers of excellence to operationalize the technology and build talent, expertise, and best practices. But once a company reaches a level of critical mass, then it makes sense to break up some of these centers of excellence and federate AI, moving experts directly into the business units where they are needed most. “For those companies that are less mature, there is value in having a center of excellence that is housing talent and learning across the institution,” says McKinsey’s Singla. “Without that, companies usually don’t have the ability to scale. Talented people want to be with other like-minded people. And less experienced people benefit from being in a center of excellence because they can grow or learn.” Distributing them too early would dilute their impact and reduce a company’s ability to iterate and duplicate successful projects across multiple business lines. “But as you get to a layer of maturity and scale, longer-term, the benefit of technologists having both a deep AI expertise and domain expertise is a real home run,” he says.

We Aren't Sure If (Or When) Artificial Intelligence Will Surpass the Human Mind

If and when AI reaches the point where it can continually improve itself, the fate of our species could depend on the actions of this superintelligent machine, warns Nick Bostrom, a University of Oxford philosopher, in his book Superintelligence: Paths, Dangers, Strategies. Yet that fate might not necessarily be a dismal one. The experts also point out that superintelligent AI could offer a solution to many of our problems. If we can’t figure out how to tackle climate change, eradicate poverty and ensure world peace, perhaps AI can. “This remarkable technology has the potential to help everybody live healthy, wealthy lives so humanity can flourish like never before,” says Tegmark, who is also the founder of the Future of Life Institute, an organization that aims to ensure these positive outcomes. Yet, he adds, it “might wipe out humanity if its goals aren’t aligned with ours.” Or as Bostrom put it in Superintelligence, when it comes to confronting an intelligence explosion, “We humans are like small children playing with a bomb.”


Researchers Devise Attack for Stealing Data During Homomorphic Encryption

The attack technique that the researchers at NC State developed involves a vulnerability in a Microsoft implementation of fully homomorphic encryption called Microsoft Simple Encrypted Arithmetic Library (SEAL). Microsoft SEAL is a collection of encryption libraries for performing computing operations on encrypted data. The vulnerability, which the researchers have described as a "power-based side-channel leakage" is present in the SEAL homomorphic encryption library through version 3.6 of the technology, according to the researchers. It enables attackers to use a single power measurement from the device doing the encryption operations to extract data in plaintext while the data is being homomorphically encrypted. The vulnerability allows attackers to listen to the machine doing the encryption and infer if a 0 bit is being processed or a 1 bit, Aysu says. "It's a few lines in the software code that give out the data being executed on the device," he says. "This information allows us to use some fancy equations and figure out the secret messages being encrypted in a homomorphic encryption scheme."
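SEAL itself uses lattice-based cryptography, but the core idea of homomorphic encryption, computing on ciphertexts without ever decrypting them, can be illustrated with textbook RSA, whose ciphertexts multiply homomorphically. This toy sketch is for intuition only and is in no way secure:

```python
# Textbook-RSA toy showing the homomorphic idea: multiplying two ciphertexts
# yields a ciphertext of the product, with no decryption of the inputs.
# Illustrative only — SEAL uses lattice schemes, and textbook RSA with tiny
# primes like these is completely insecure.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
e, d = 17, 2753           # public/private exponents: e*d ≡ 1 mod (p-1)(q-1)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 12, 7
product_ct = (enc(a) * enc(b)) % n   # computed on encrypted values only
assert dec(product_ct) == a * b      # the product is recovered on decryption
```

The side channel the researchers describe does not break this algebra; instead, it watches the power drawn while each bit is processed during encryption, leaking the plaintext out-of-band.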


How Kubernetes and Database Operators Drive the Data Revolution

Big, centralized application backends just could not provide the flexibility required to scale from thousands to millions of requests per second. Most of us can probably remember at least a couple of cases when “monolithic” web applications experienced severe performance issues after going viral. The solution to this problem came from adopting an approach in which organizations split these monoliths into smaller “micro” services running in Docker containers that can be horizontally scaled both independently of each other and much more quickly than monoliths. With each microservice adding to the demand on development operations, however, this strategy wouldn’t have been so successful without container orchestration frameworks like Kubernetes. Introduced publicly in 2014, Kubernetes, which grew out of Google’s internal Borg system, quickly proved itself a top choice for automating deployment workflows and is today one of the industry standards for modern development operations. And as an open source, cloud native project, Kubernetes continues to evolve and improve.


The best is yet to come: What’s next for blockchain and the creator economy

The rise in creative energy has inspired the developer community as well. New niche streaming platforms have grown up, helped by the emergence of low-cost decentralized infrastructure that allows application builders to encode video, store data and handle identity without having to pay expensive centralized cloud providers for such services. These centralized providers will increasingly find themselves on the defensive. Two attention-grabbing incidents in 2021 are illustrative: Hackers attacked Twitch and released private information about its code and its users to the world. And, Facebook suffered colossal reputational damage from a lengthy outage and whistleblower claims that its management has repeatedly chosen to prioritize profit over safety. Big Tech’s woes and pandemic-related restrictions have sped up fundamental changes already underway in how the world produces, consumes and uses video content — changes likely to propel growth in the creator economy well into the future. 


Harvard census identifies most commonly used open source packages

The census is broken down into eight ranked lists. Four include version numbers and four are version agnostic. Packages that use the default JavaScript npm package manager have been split out from non-npm packages. There are also separate lists for packages that are directly called by developers versus those that are indirectly called as dependencies, bringing attention to the kinds of deeper dependencies that are more difficult for developers to observe within their environments. These lists “represent our best estimate of which FOSS packages are the most widely used by different applications, given the limits of time and the broad, but not exhaustive, data we have aggregated,” the report notes. While the census does not attempt to identify the riskiest OSS projects, it does note that “measuring risk profiles is a separable task, and it’s easier to do it once the most widely used software is identified.” That work will require cross-industry effort and will depend on the individual risk profile of the consuming organization.


Multi-Chain Evolution: Why Blockchain’s Future is Interoperable

One project with interoperability at its heart is Spherium. Not only has the startup developed a cross-chain bridge, but its incubation program HyperLaunch facilitates the seamless entry of innovative blockchain projects into the cross-chain ecosystem. Spherium also has its own cross-chain DEX, which supports trading between tokens on EVM and non-EVM compatible chains. To date, Spherium has partnered with leading NFT, DeFi, P2E, and general blockchain projects. The alliance seeks to “expand the multi-chain experience for creators and buyers to bridge major blockchains and add certain defi functionalities into their platform.” Through HyperLaunch, Spherium offers projects the opportunity to integrate a bridge solution into their core functionality and enable the deployment of their native tokens on different networks. With the audited Spherium bridge, token swaps currently take less than one minute, and the bridge is free to use for projects in the HyperLaunch program. Geared towards DeFi, NFT and meta-world platforms, the incubation program offers more than just bridge access; staking and dual-farming solutions are also provided, along with extensive technical support.


Finance firms scrape alternative data from unexpected sources

In light of the "Great Resignation" and unprecedented job mobility in part sparked by the pandemic, such data about job happiness is "top of mind for investors today," Lopata said. Another timely use for alternative data is tracking how inflation in the U.S. is disrupting markets. Thinknum is following used car sales on CarMax and Carvana, two of the big auto sales apps. "We're tracking all that data in real time down to a VIN number, so that allows you to understand whether prices are peaking," Lopata said. "Beyond just tracking the peaks … we're tracking when the peak ends." "We're able to identify that in January '22, we finally started to see some decrease in pricing," she added. Other current market trends for which Thinknum is digging up alternative data include changes in the food delivery services business and cryptocurrency price fluctuations, where the vendor has discovered that GitHub, the provider of internet hosting for software development, is a prime source of data. "We've been looking at where we can find a signal before it hits the market," Lopata said.


How AI and Analytics Help Organizations Deliver a Better Customer Experience

Technology that allows companies to analyze and deliver data where it’s needed and at the right time is crucial to producing a better customer experience. To achieve this, companies need a data architecture that provides a highly granular view of their customers. The architecture must be flexible and adaptable, so new data can be incorporated and changes can be made without causing major technological upheavals. The problems we face chasing omniscience about customers are constantly evolving. The tools and solutions evolve to keep pace. The data architecture we implement must handle these changes in a forgiving manner. Additionally, the architecture must make data available to developers, analysts, and other staff to use whenever they need it in a way that protects the integrity and security of the individual data event. A data fabric brings together data of all forms — from the edge to the cloud — and provides services for discovery, governance, quality, and transformation. This architecture truly puts data to use at scale to improve customer experiences.



Quote for the day:

"Don't let a bad day make you feel like you have a bad life." -- Joubert Botha

Daily Tech Digest - March 04, 2022

How the C-Suite can set the organisation up for metaverse success

In order to reap the benefits that the metaverse offers, the most important thing leaders need to do currently is to build capability and thinking within their organisation. We are currently in the exploratory phase of the metaverse, so there needs to be encouragement for employees to develop greater awareness and adopt a more inquisitive mindset. Hiring new talent is going to be imperative to acquire new skills and expand that knowledge across teams to enable an incubator mentality. The metaverse will allow employees to solve old problems in new ways, but leaders will need to give creatives the space and independence to trial different ideas in order to find the one that best suits the business strategy and generates value. To strike this balance, I would recommend creating a separate team outside the IT department, yet within the matrix of the business. This team doesn’t have to be in-house – it could be outsourced. But it must be plugged into your marketing, sales and product development teams so that it has an understanding of the evolving business strategy.


All You Need to Know About User Session Security

Session security is an important consideration in the design of any system that requires communication between a server and a client. Improper security can leave user accounts vulnerable to unauthorized access. OWASP (the Open Web Application Security Project, a leading authority on application security) considers the improper implementation of authorization/authentication the second biggest risk to application security. Several notable hacks illustrate this point: the Docker Hub database hack earlier this year resulted in stolen GitHub access tokens; GitLab had a vulnerability where all of its users’ auth tokens were exposed in URLs, had no expiry time, and were susceptible to brute-force attacks due to their short length; and a software bug made it possible to steal access tokens, affecting 90 million Facebook accounts. It is tricky, time-consuming, and expensive to correctly implement user session management. According to an a16z operating partner (top-tier VC) and former Box CSO, "authentication and authorization is the number one spending cost for organizations when it comes to their security budget."
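The failures listed above (tokens with no expiry, tokens short enough to brute-force) map to a few basic server-side controls. A minimal sketch with hypothetical names; the one-hour TTL is an arbitrary choice for illustration:

```python
# Minimal server-side session management: long random tokens, a server-side
# store, and enforced expiry — the controls whose absence the excerpt blames
# for the GitLab and Facebook incidents.
import secrets
import time

SESSIONS = {}            # token -> (user, expires_at); held server-side only
TTL_SECONDS = 3600       # illustrative one-hour lifetime

def create_session(user, now=None):
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(32)        # 256 random bits: not guessable
    SESSIONS[token] = (user, now + TTL_SECONDS)
    return token

def validate_session(token, now=None):
    now = time.time() if now is None else now
    entry = SESSIONS.get(token)
    if entry is None:
        return None
    user, expires_at = entry
    if now > expires_at:
        del SESSIONS[token]                  # expired tokens are purged
        return None
    return user

tok = create_session("alice", now=0)
assert validate_session(tok, now=10) == "alice"
assert validate_session(tok, now=4000) is None   # past the TTL, rejected
```

Note that expiry limits the damage of a stolen token but does not prevent theft; tokens should also travel only over HTTPS and never appear in URLs.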


Edgeless Systems Brings Confidential Computing to Kubernetes

Edgeless argues that by using Constellation to leverage confidential computing you can isolate and runtime-encrypt entire Kubernetes deployments. This means, the company states, Constellation enables you to use the public cloud like your private cloud. “Confidential Computing will usher in a new era of securing data in the cloud. With our unique expertise, we are making this new technology accessible to enterprises at scale,” said Felix Schuster, a confidential computing pioneer and Edgeless Systems co-founder and CEO. “Processing data that is always encrypted — not only at rest and in transit but also while in use — is a difficult task.” But Constellation makes it happen. This addresses the old, familiar worry of “Is your data actually safe on a third-party public cloud?” Edgeless claims that it is now, since your data is securely encrypted within the cloud’s RAM. Now, as you’d guess, securing and using encrypted data isn’t easy. Constellation addresses this by taking care of such complexities as the verification or attestation of confidential virtual machines (VMs) and workloads, secure connections, key management, and data encryption.

 

Digital transformation: 3 IT culture traps that can hold you back

To accelerate your organization’s digital transformation, first ensure that your business and IT teams have open conversations about business issues and how IT can solve them. If an IT team feels that it’s being dumped on rather than treated as a strategic partner, an “us vs. them” culture can easily develop. The inverse is also true: IT teams cannot simply drop new technology solutions on unsuspecting business teams, believing they’re solving issues that the latter may not even realize exist. Teams should also have clearly defined roles, and they need to align on a common language. For instance, terms such as “production” and “test” mean different things for each team. When both teams try to understand each other, they can find synergy – which is the first step toward success (and creating robust lines of communication should be priority one). Achieving synergy shows that digital transformation is not merely a process of bringing in new technology to create change; it’s about redefining how the organization operates and communicates.


Decarbonising datacentres: Turning the hot air about heat reuse into real-life use cases

As with any new technology, there are a number of challenges that need to be overcome first to take full advantage of the opportunity and promise of heat reuse. One of them is the proximity to the heat demand. For example, the closer a datacentre is to homes or businesses in need of heat, the better. This could be achieved with government support and policy-making in the UK, as many European Union member countries have already done. Where datacentres cannot be situated near existing heat demand, applications such as greenhouses or sustainable farms could be purposely located there to make use of this low-carbon, low-cost energy, which is a path Norwegian colocation firm Green Mountain is pursuing. The captured heat also has to have a price in order to secure the return on investment, and to let the competition drive the change. There are a number of facilities around the world already demonstrating the feasibility of installation, environmental benefits and enormous savings resulting from recovering waste heat.


Can we and should we have fully open APIs?

To consider the limitations of open APIs, it is essential to also take into account the issues with APIs themselves. According to Salt Edge’s survey of over 2,000 API integrations with banks from 31 European countries, 38% of bank APIs don’t meet EU or UK regulatory standards. It was also revealed that 43% of banks did not support automated registrations to access the relevant APIs, 22% had faulty documentation, and 28% had downtime during the integration. A closed API is not openly accessible and typically resides in highly secure settings. With open APIs, by contrast, a recent report from Transparency Market Research showed that, in terms of security, the uncertainty and vulnerability of third-party apps is one of the major factors restraining the market. According to experts, a good open API requires basic considerations such as the choice of data format (JSON, XML, Text, VML, etc.), the protocol (HTTP, HTTPS), and the API version. It is also important to consider security, be it API authentication or HTTPS.
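The basic considerations the experts list (data format, protocol, versioning, authentication) can be sketched in a toy request handler. The endpoint, token, and version scheme here are hypothetical:

```python
# Toy API gateway check: versioned paths, JSON responses, and bearer-token
# authentication. Everything here (tokens, paths) is made up for illustration.
VALID_TOKENS = {"s3cr3t-token": "demo-client"}
SUPPORTED_VERSIONS = {"v1"}

def handle_request(path, headers):
    # Expect paths like /v1/accounts — the version is the first segment.
    parts = path.strip("/").split("/")
    if not parts or parts[0] not in SUPPORTED_VERSIONS:
        return 404, {"error": "unsupported API version"}
    # Bearer-token authentication from the Authorization header.
    auth = headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        return 401, {"error": "invalid or missing token"}
    # JSON-serializable dict as the response body (the chosen data format).
    return 200, {"resource": parts[1:], "client": VALID_TOKENS[token]}

status, body = handle_request("/v1/accounts",
                              {"Authorization": "Bearer s3cr3t-token"})
assert status == 200 and body["client"] == "demo-client"
assert handle_request("/v2/accounts", {})[0] == 404
```

Putting the version in the path lets a provider retire `/v1` on its own schedule, and rejecting unauthenticated calls before any business logic runs addresses the third-party-app vulnerability concern the report raises.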


How to Navigate the Complexity of SaaS Management

SaaS offers tremendous value to organizations, but businesses need an easier path to rein in SaaS complexity. What’s the way forward? Adopting a comprehensive approach to SaaS management that solves IT, security, risk and finance teams’ challenges by giving them a single source of truth into every SaaS application is the answer. A modern approach to SaaS management enables stakeholders to: discover both known and unknown SaaS applications, providing complete and actionable visibility into all data types and interconnectivity flows; uncover and mitigate various security risks that put sensitive customer and business data at risk; deliver the insights on user access and app utilization needed for better IT management and cost optimization across all SaaS apps; and streamline SaaS compliance reporting. A comprehensive SaaS management solution that addresses risk management and business value for SaaS in one place for all stakeholders is imperative for business success.


Using Emergence and Scaffolding to Address Complexity in Product Development

Some would argue that known knowns should be simple to deal with and easy to elaborate, but even here caution is needed. I can recall working on the development of a new card settlement system where we needed to be able to deal with blacklisted cards. The assumption was that a card would either be blacklisted or not, but we were advised that the current system could return ‘yes’, ‘no’, or a ‘maybe’, and no one could explain the latter. We had made the mistake of assuming this was clear and obvious, but it really was a complicated issue, and resolution was both time-consuming and costly. We have extensive experience addressing the second type of need, known unknowns, and you could argue that agile practices accommodate articulation of these needs; related practices such as innovation games also help here. This is broadly the case, and iterative development is helpful as it allows us to articulate these elements and to incorporate them.


Are Blockchains Databases?

We’re going to see database and blockchain technologies continue to grow towards one another. On the blockchain side, the reasons are obvious: database feature sets and the operational and cost expectations of businesses for these use cases have emerged over many decades for good reason, and those reasons aren’t going away. Commercial blockchain solutions will have to meet those business expectations or die trying. The pressure on databases to adopt blockchain capabilities is a little more subtle. A bit of this is healthy competition — what customer doesn’t want to be able to time travel through older versions of data, sleep better knowing their data is tamperproof, or eliminate the overhead of application logs by integrating lineage directly into the data model itself? But beyond that, database users are also operating in increasingly regulated environments: GDPR and CCPA/Prop 24 have made it necessary to control PII not just within a company, but across companies. Financial and other regulations require tracking anything that involves money or credit more carefully than ever before, including being able to audit what was changed, when, and by whom.
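The audit requirement mentioned here, knowing what was changed, when, and by whom, is essentially an append-only change log kept alongside the data. A minimal in-memory sketch with hypothetical names:

```python
# Sketch of an audited key-value store: every write appends an immutable log
# entry recording timestamp, user, key, old value, and new value, so audit
# questions can be answered from the data itself.
import time

class AuditedStore:
    def __init__(self):
        self._data = {}
        self._log = []   # append-only: (timestamp, user, key, old, new)

    def put(self, key, value, user, now=None):
        now = time.time() if now is None else now
        old = self._data.get(key)
        self._data[key] = value
        self._log.append((now, user, key, old, value))

    def get(self, key):
        return self._data.get(key)

    def history(self, key):
        # "What was changed, when, and by whom" for one record.
        return [entry for entry in self._log if entry[2] == key]

store = AuditedStore()
store.put("credit_limit:42", 1000, user="ana", now=1)
store.put("credit_limit:42", 5000, user="bob", now=2)
assert [e[1] for e in store.history("credit_limit:42")] == ["ana", "bob"]
```

Production databases implement the same idea with write-ahead logs or temporal tables; the blockchain twist is hash-linking the log entries so the history itself is tamper-evident.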


Regulator Announces Border Gateway Protocol Security Review

So it's welcome news that, after years of warnings from security experts that BGP remains poorly secured and poses a risk, the U.S. government is finally taking a closer look. The Federal Communications Commission on Monday announced that it has begun a security review of BGP and will soon open a 30-day period for comment. The timing of the FCC's call for comments is auspicious, not least from a national security perspective, given that Russia reportedly used BGP hijacking against Ukraine last week. "There's nothing like a war to get them listening," Woodward says. Ukraine's computer emergency response team, CERT-UA, reported seeing BGP hijacking attacks less than 24 hours before Russian troops invaded on Feb. 24, at the same time Ukrainian government and banking systems were being hit by distributed denial-of-service attacks. "Around the same time of the DDoS attacks … CERT-UA asserted that there was a BGP hijacking attack against a Ukrainian bank," Cisco Talos says.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith

Daily Tech Digest - March 03, 2022

Multifactor Authentication Is Being Targeted by Hackers

Proofpoint found today’s phishing kits range from “simple open-source kits with human-readable code and no-frills functionality to sophisticated kits utilizing numerous layers of obfuscation and built-in modules that allow for stealing usernames, passwords, MFA tokens, social security numbers, and credit card numbers.” How? By sending phishing emails with links to a fake target website, like a login page, to naive users. That, of course, is old news. Hackers have been using that technique for ages. What this “new kind of kit” brings to the table is a malware-planted MitM transparent reverse proxy. With this residing on the target’s PC, it intercepts all the traffic including their credentials and session cookies even if the connection is to the real site. ... One such program, Modlishka, already automates these attacks. Polish security researcher Piotr Duszyński said of it, “With the right reverse proxy targeting your domain over an encrypted, browser-trusted, communication channel one can really have serious difficulties in noticing that something was seriously wrong.”
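Stripped of networking details, the interception step is straightforward: the proxy relays the victim's request to the real site unchanged while recording credentials and session cookies in transit. A deliberately simplified Python sketch of that data flow (no real networking; all names are illustrative, not taken from Modlishka or any actual kit):

```python
def relay(request_headers: dict, form_fields: dict, upstream: str):
    """Forward a login request upstream while capturing secrets in transit."""
    # What the attacker keeps: the session cookie and submitted credentials.
    captured = {
        "cookies": request_headers.get("Cookie", ""),
        "username": form_fields.get("username"),
        "password": form_fields.get("password"),
    }
    # What goes on to the real site: the request, passed through untouched,
    # so the victim sees a normal, successful login.
    forwarded = {
        "url": upstream,
        "headers": dict(request_headers),
        "body": dict(form_fields),
    }
    return captured, forwarded
```

Because the forwarded request is untouched, the victim completes a genuine login against the real site, which is why this class of attack is so hard to notice.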


How to choose a cloud data management architecture

Multi-cloud models incorporate one or more services from more than one cloud provider (and optionally may include on-premises or hybrid architectures). In this scenario, the difference is that services from multiple cloud providers are used. A DBMS offering and the applications that rely on it may be deployed on-premises and/or on one or more clouds. As such, all of the considerations of hybrid cloud may apply with the added considerations of deploying software in multiple cloud environments. These offerings have historically been limited to independent software vendors (ISVs) rather than native CSPs, as the ISVs have more of a vested interest in making sure that their software runs in as many environments as possible. However, cloud service providers are increasingly engaging in multi-cloud and intercloud scenarios. The multi-cloud scenario generally appeals to end users who are concerned about cloud vendor lock-in and want to be able to move their applications easily to a different cloud provider.


How blockchain investigations work

Knowing the exact entity behind a batch of addresses can be crucial, and blockchain intelligence companies have ways of finding that. They aggregate information from multiple sources, often using off-chain data to enrich their understanding of transactions. They look at dark web forums, social media posts, and court papers, among others. "You can be on Facebook, and you see [someone] soliciting funds in bitcoin and there's an address there," Redbord says. That address is copied and can be associated with a cybercriminal ring, a terrorist organization, or other illicit entities, depending on the case. Such nuggets of information are gathered by blockchain intelligence companies and stored for future reference. "[We] are building a giant blacklist of cryptocurrency addresses," Redbord adds. This process of categorizing addresses is done in the background. Investigators using blockchain intelligence software simply input the address corresponding to the payment. Then, they can see the flow of digital money.
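At its core, the attribution layer described here is a lookup table that enriches on-chain addresses with off-chain intelligence. A toy Python sketch of the idea (real systems cluster addresses and weigh sources far more carefully; every name below is hypothetical):

```python
# address -> {"entity": ..., "source": ...}, populated in the background
# from forums, social media posts, court filings, and similar sources.
address_labels = {}

def label_address(address: str, entity: str, source: str) -> None:
    """Record an off-chain attribution for an on-chain address."""
    address_labels[address] = {"entity": entity, "source": source}

def check_payment(address: str) -> dict:
    """What an investigator sees when they input a payment address."""
    return address_labels.get(address, {"entity": "unknown", "source": None})
```

Investigators then query this table with the address seen in a payment and follow the flow of funds from there.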


Will AI Ever Become Ubiquitous?

We’re entering an era where our personal data will be more valuable than ever, and consumers are beginning to wake up to that fact. A 2019 report indicated that over 60 percent of respondents felt connected devices were “creepy,” which will likely slow adoption of such devices. While all of this may sound daunting, there are some interesting innovations addressing the pain points. And you’re likely enjoying the benefits of this thinking without even realizing it. To understand, we have to go into a room filled with networking gear. Most of us are familiar with server rooms thanks to TV shows and movies where we see some generic, but high-tech, “data center.” What most consumers don’t realize is that companies don’t just upgrade all their data center hardware at once. Just as you likely don’t buy a new router when you buy a new laptop, data center components are swapped out over time, here and there, and can wind up as a patchwork of vendors and services. Some time ago, network administrators unified their management while allowing underlying systems to micro-manage the individual components.


IoT Deployment – How to Secure and Deploy Internet of Things Devices

Many IoT devices are connected to the internet and can be accessed by hackers from anywhere in the world. This makes them ripe for attack. Hackers can exploit vulnerabilities in these devices to gain access to sensitive data or even take control of them. Another issue is that many IoT devices are not well-integrated into existing IT security frameworks. As a result, they may not be properly protected against cyber threats. For example, many IoT devices lack adequate firewalls and intrusion detection systems, making them susceptible to attack. Finally, there is also a risk that malicious actors could weaponize IoT devices for use in DDoS attacks or other cyberattacks. For example, hackers could exploit vulnerabilities in smart TVs or other internet-connected devices to launch a devastating DDoS attack against a company or organization. To mitigate these security problems, organizations should take steps to secure their IoT devices properly. They should ensure that all devices have strong passwords and are routinely updated with the latest security patches.


The Cloud Challenge: Choice Paralysis and the Bad Strategy of “On-Premising” the Cloud

Here is the troubling fact: most organizations know that the cloud is different from on-prem, and most of them also know the main differences. Yet this knowledge doesn’t translate into better solutions. That is because most organizations face a challenge: "With all these cloud services out there, which one to use in each scenario?" Too many choices can lead developers/architects to some kind of decision paralysis. Instead of going through the many choices, they just resort to the most familiar. In the case of organizations that are used to building on-prem, this often means choosing the old stack without even considering the alternatives. Having tens of cloud services is indeed a challenge (Azure has 400+ different services at the time of writing this, and each service might have tens of built-in capabilities). However, it is still a good challenge to have. That is because if you’re not resolving this challenge, you’re effectively dealing with the challenge of how to make the cloud behave like on-prem.

 

Software development is changing again. These are the skills companies are looking for

Today, good developers work across the stack – in fact, their success relies on their ability to engage with a range of stakeholders to deliver business outcomes, says Spencer Clarkson, chief technology officer at Verastar. "I think what makes a good developer nowadays is that rounded understanding," he says. "They need to be agile in working style, and also understand the concept of doing Agile development – fail fast, develop quickly." That's something that others recognise, too. Tech analyst Forrester says Agile delivery is critical to successful digital transformations, yet the best enterprises go even further. ... "Software development is now much more about gluing things together rather than building something from scratch," he says. "There's lots of good apps and products out there. It's how you glue them together – that's your IP. People need to have that aptitude first and be multiskilled second." Gartner also says organisations and their employees should be prepared to move in multiple strategic directions at once due to the ongoing requirements for innovation and digitisation.


Comparing Programming models: SYCL and CUDA

SYCL and CUDA serve the same purpose: to enhance performance through processing parallelization in varied architectures. However, SYCL offers more extensibility and code flexibility than CUDA while simplifying the coding process. Instead of using complex syntax, SYCL enables developers to use ISO C++ for programming. Unlike CUDA, SYCL is a pure C++ domain-specific embedded language that doesn’t require C++ extensions, allowing for a simple CPU implementation that relies on pure runtime rather than a particular compiler. SYCL is a competitive alternative to CUDA in terms of programmability. With SYCL, there’s no need for a complex toolchain to develop an application, and the tools ecosystem is readily available, ensuring a hassle-free development experience. SYCL doesn’t need separate source files for the host and device. Instead, you can find the code for the host and the device in the same C++ source file. SYCL implementations are capable of splitting up this source file, parsing the code, and sending it to the appropriate compilation backend.


Ban predictive policing systems in EU AI Act, says civil society

As it currently stands, the AIA lists four practices that are considered “an unacceptable risk” and which are therefore prohibited: systems that distort human behaviour; systems that exploit the vulnerabilities of specific social groups; systems that provide “scoring” of individuals; and the remote, real-time biometric identification of people in public places. However, critics have previously told Computer Weekly that while the proposal provides a “broad horizontal prohibition” on these AI practices, … In their letter, published 1 March, the civil society groups explicitly call for predictive policing systems to be included in this list of prohibited AI practices, which is contained in Article 5 of the AIA. “To ensure that the prohibition is meaningfully enforced, as well as in relation to other uses of AI systems which do not fall within the scope of this prohibition, affected individuals must also have clear and effective routes to challenge the use of these systems via criminal procedure, to enable those whose liberty or right to a fair trial is at stake to seek immediate and effective redress,” it said.


IT leadership: 3 new rules for hybrid work

The very nature of the annual review sets up a dynamic where the manager critiques and the employee is on the defensive. The employee often feels that the manager focuses solely on shortcomings and not on achievements. They may wonder, “Why didn’t my manager mention this issue when it actually happened?” or “Why won’t my manager recognize the things I’ve done right?” The manager may be new to the position and not entirely familiar with the employee, their position, or work history, making a constructive review more challenging. In addition, many managers simply are not trained to communicate, coach, and lead effectively. With higher numbers of employees working remotely, reviews have an added layer of difficulty especially if they aren’t done in person. Body language can be harder to read. Without seeing the employee in action day-to-day, the manager might not be aware of how productive they are. Zoom fatigue can also cause many employees to remain silent rather than actively participate.



Quote for the day:

"Leadership is about carrying on when everyone else has given up." -- Gordon Tredgold

Daily Tech Digest - March 02, 2022

7 mistakes CISOs make when presenting to the board

“Board meetings are not a great place for surprises,” says James Nelson, vice president of information security at Illumio, and CISOs need to avoid being caught off guard by questions they can’t answer. “Preparation should include not just generating the content in your slides, but also thinking about what questions the board will potentially ask you and considering your answers ahead of time.” Nelson advises apprising any executive team attendees of both your prepared material and the questions you think will be asked, as well as how you plan to answer them. “They will know you can’t guess them all, but the process can help build trust,” he adds. ... A boardroom is not the place to unburden yourself, although it can be tempting when you feel the collective burden of everyone’s risks on your shoulders, says Watts. “Don’t be the prophecy of doom, and be very careful when using fear, uncertainty, and doubt (FUD) as a weapon of leverage—it can come back to bite you.” Instead, explain why you think a problem exists, and follow that with solution options, your recommendations, and their associated benefits, Watts continues. “Do this as a package.”


InfluxDB as an IoT Edge Historian: A Crawl/Walk/Run Approach

The question of how to get data into a database is one of the most fundamental aspects of data processing that developers face. Data collection can be challenging enough when you’re dealing with local devices. Adding data from edge devices presents a whole new set of challenges. Yet the exponential increase in IoT edge devices means that companies need proven and reliable ways to collect data from them. The following are three different approaches to collecting data from edge devices. Edge devices have different capabilities — processing power, memory capacity, connectivity, etc. — so finding the right solution for your use case may require a bit of trial and error. However, you can use these approaches as a jumping-off point for building your solution. For context, we’re using InfluxDB as the processing and storage solution, and the cloud version of InfluxDB is the target destination here. Each edge device in these examples also runs the open source version of InfluxDB. We’re using the Flux language to create tasks that perform data transformations and annotations.
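Whichever collection approach you choose, readings ultimately reach InfluxDB encoded as line protocol. As a rough illustration of that wire format, a minimal encoder might look like this (a simplified sketch; in practice you would use the official client libraries, and this version skips the escaping rules for special characters):

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, timestamp_ns: int) -> str:
    """Encode one reading as InfluxDB line protocol:
    measurement,tag=v field=v timestamp"""
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))

    def fmt(v):
        # bool must be checked before int (bool is a subclass of int).
        if isinstance(v, bool):
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"       # integers carry an 'i' suffix
        if isinstance(v, str):
            return f'"{v}"'      # strings are double-quoted
        return repr(v)           # floats are written plain

    field_str = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{measurement}{tag_str} {field_str} {timestamp_ns}"
```

For example, a reading from an edge device encodes as temperature,device=edge-01 value=21.5 followed by a nanosecond timestamp.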


Introducing Ballast: An Adaptive Load Test Framework

As Uber’s architecture has grown to encompass thousands of interdependent microservices, we need to test our mission-critical components at max load in order to preserve reliability. Accurate load testing allows us to validate if a set of services are working at peak usage and optimal efficiency while retaining reliability. Load testing those services within a short time frame comes with its unique set of challenges. Most of these load tests historically involved writing, running, and supervising tests manually. Moreover, the degree to which tests accurately represent production traffic patterns gradually decreases over time as traffic organically evolves, imposing a long-term maintenance burden. The scope of the load testing effort continuously increases as the number of services grows, incurring a hidden cost to adding new features. With this in mind, we developed Ballast, an adaptive load test framework that leverages traffic capture using Berkeley Packet Filter (BPF) and replays the traffic using a PID Controller mechanism to adjust the number of requests per second (RPS) to each service. 
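Uber doesn't publish Ballast's controller code in this article, but the PID mechanism it names is standard: compare an observed load metric against a target and adjust RPS by a weighted sum of the current error, its accumulation over time, and its rate of change. A generic Python sketch (the gains, metric, and update cadence are illustrative assumptions, not Ballast's actual values):

```python
class PIDController:
    """Classic PID loop: output = Kp*error + Ki*sum(error) + Kd*d(error)/dt."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target value of the load metric
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured: float, dt: float = 1.0) -> float:
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive RPS toward a hypothetical target CPU utilization of 70%:
pid = PIDController(kp=0.5, ki=0.1, kd=0.0, setpoint=70.0)
rps = 100.0
observed_cpu = 40.0                              # service is under-loaded
rps = max(0.0, rps + pid.update(observed_cpu))   # controller raises RPS
```

Each control interval, the replayer measures the metric, feeds it to update(), and applies the returned adjustment, so traffic ramps toward peak load smoothly instead of overshooting.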


Why Israel's Ministry of Defense is moving to the public cloud

The Tel Aviv-based engineering head of the MOD's cloud initiative, who asked that his name not be published for his own security purposes, explained the reasoning behind the changeover. "So, we are a very conservative organization, as to say, we have sensitive information, various sensitivity and classifications, and most of the data processing we do on an on-premise network," the MoD Infrastructure Cloud Group Leader told ZDNet. "But the data grows, and we (now) can just grow with it. So when we go to a public cloud, we want to address our ever-growing compute needs. And the second level is the (distribution) of services -- hundreds and even thousands of software services. So for us, it is in essence, a digital transformation. We can't achieve what we need by staying at home on our on-premise networks." Using the Anjuna Confidential Cloud software, the MoD is now able to achieve public cloud scale, agility, and maximum data security immediately, without having to recode or refactor applications, the MoD project head said.


CISO Checklist for Offboarding Security Staff

"As companies deal with increased rates of employee turnover, they must also consider the fact that highly skilled ex-employees are leaving with key institutional knowledge and confidential information," warns Todd Moore, global head of encryption products at Thales, a France-based multinational provider of electrical systems and services for the aerospace, defense, transportation, and security markets. "This potentially increases the risk of data breaches and other cyber incidents, which is further amplified when data organization and protection is overseen by human managers." Leave nothing to chance or oversight by working with a checklist instead. "CISOs should already be monitoring and updating the access rights of all employees and manage administrator access periodically and have a list of tasks and procedures in place when employees leave," says Ahmad Zoua, senior project manager at Guidepost Solutions, a global security, compliance, and investigations consulting firm.


10 key ESG and sustainability trends for business, IT

CIOs have an important role to play in the growing concern for sustainability and other socially conscious issues. "We live in a more technology-enabled and technology-dependent world than ever before, leaving CIOs with a great opportunity and an enormous responsibility," said Jahidul Khandaker, senior vice president and CIO of Western Digital, a U.S. computer hard disk drive manufacturer and data storage company, headquartered in San Jose, Calif. "CIOs must balance ... new [market] demands with how we respond to critical issues facing the world today, especially around the environment." Being proactive in these areas is critical. "Every enterprise is on the pathway to net-zero whether they have decided this for themselves at this point or not," Mingay said. "The only choice they have left is whether they want to lead, follow or get drawn in kicking and screaming." Regardless of how companies choose to engage, CIOs will have different roles, depending on those initiatives, Mingay said. Those roles can range from supporting leaders in other departments with the right information to taking on a more direct role in managing sustainability transformation, much like other digital transformation projects.


Avoiding the Chaotic 5G Rollout at Airports

The similarities between the C-band frequencies and those used by radio altimeters can interfere with the altimeters' reception of the appropriate radio waves, resulting in the following risks: the risk of an aircraft's engine and braking systems not transitioning to landing mode, preventing the aircraft from stopping on the runway; and the risk of the altimeter not being able to receive the waves, or to distinguish the waves it is expecting from other nearby waves, thereby giving a wrong reading or not functioning at all. These risks could result in situations such as the two fatal crashes of the Boeing 737 Max in Indonesia and Ethiopia, which killed 346 people. The US Federal Aviation Administration (FAA) and airlines have raised concerns about these risks. In response, the wireless carriers that purchased 5G frequencies via the Federal Communications Commission (FCC) 5G Spectrum Auction and are implementing the 5G rollout (Verizon and AT&T) stated that they would delay the expansion of new 5G cellular service near some airports in order to avert damaging disruptions in airport operations.


Behavioral Analytics is getting trickier

Although most enterprise CISOs are fine with behavioral analytics on paper (on a whiteboard? As a message within Microsoft Teams/GoogleMeet/Zoom?), they're resistant to rapid widespread deployment because it requires creating a profile for every user — including partners, distributors, suppliers, large customers and anyone else who needs system access. Those profiles can take more than a month to create to get an accurate, consistent picture of each person. I hate to make this even worse, but there are now arguments that security admins don't need one profile for every user, but possibly dozens or more. Why? ... You now have a behavioral profile of that user. That profile, however, is likely based on the user’s regular behavior during normal workdays. What about when that user is exhausted, say possibly after arriving in the office from a red-eye flight? Or ecstatically happy or horribly depressed? Do they behave differently in an unfamiliar hotel room compared to the comfort of their home office? Do they act differently after their boss has screamed at them for 10 minutes?


Software development coaching dos and don’ts

Going one step beyond empathy requires software development managers to recognize the symptoms of people burning out. Signs of burnout include decreased productivity, increased cynicism toward colleagues, and a sense of detachment from the company. Dawn Parzych, manager of developer marketing at LaunchDarkly, believes that development teams can reduce stress by utilizing devops tools and practices. She shared a recent study showing that 91% of software development professionals who lack processes, such as using feature flags, report feeling stressed during deployments. She suggests, “As a manager, look to how you can remove stress and help your team members avoid burnout by improving build and deploy processes through the use of chaos days, observability, or feature flags.” ... Development managers should remind software developers that they don’t need to reinvent the wheel and code solutions from scratch all the time. There’s a wealth of software as a service, open source, cloud services, and low-code solutions available for developers to leverage.


Agile transformation: 5 ways to measure progress

In Agile workplaces, silos are broken down in favor of collaboration, communication, and transparency. To determine how well this is happening in your organization, assess the structures being put in place across projects. The presence of product owners in each of your scrum teams is a good starting point. A regular conversation with the product owners and scrum leaders can help you assess if the hierarchies are breaking down in favor of a more synergistic approach. Consider joining a few standup calls as an observer to get a first-hand understanding of how the development of a specific feature or assignment is moving between product owners, development teams, and quality assurance owners. A new business strategy can also be evaluated in terms of employee buy-in. If team members believe in the value and importance of Agile transformation, they will work harder to ensure its success. But if a critical mass of employees is skeptical about the change, they will make it harder to see a positive result.



Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher

Daily Tech Digest - March 01, 2022

Using APIs with Low-Code Tools: 9 Best Practices

One of the best things about low- and no-code tools is their potential to get non-technical users involved in creating applications. But unless your non-technical colleagues understand what they can get out of using these tools — and unless they can use the tools without coding skills — it doesn’t matter which ones your organization adopts. “It’s all about users at the end of the day,” said Leonid Belkind, co-founder and chief technology officer at Torq, which provides a no-code security automation platform. “How many tools have you seen in your lifetime become shelfware? The organization bought it and nobody uses it. That’s the biggest risk. How do you avoid it? Find out the motivation and goals people have and match the tool to it,” he added. If you put user needs first, “the chances of it becoming shelfware are significantly lower.” It’s important to not only find out users’ needs but also ask them to explain how they now complete the tasks you’re trying to automate, Belkind said. Why is it important to identify who is going to work with the tool? he asked.


When NOT To Use Apache Kafka

If your application requires sub-millisecond latency, Kafka is not the right technology. For instance, high-frequency trading is usually implemented with purpose-built proprietary commercial solutions. Always keep in mind: the lowest latency would be to not use a messaging system at all and just use shared memory. In a race to the lowest latency, Kafka will lose every time. However, for the audit log, transaction log, or persistence engine parts of an exchange, zero data loss becomes more important than latency, and Kafka wins. Most real-time use cases "only" require data processing in the millisecond-to-second range. In that case, Kafka is a perfect solution. ... Kafka is not a deterministic system. Safety-critical applications cannot use it for a car engine control system, a medical system such as a heart pacemaker, or an industrial process controller. ... Kafka requires good, stable network connectivity between the Kafka clients and the Kafka brokers. Hence, if the network is unstable and clients need to reconnect to the brokers all the time, operations are challenging and SLAs are hard to reach.


5 Deadly Sins of Software Development

Crafting software, a translation of ideas into code that is executable by a computer and understandable by a human, is not an easy task. Before jumping into the development tools, you must devote a fixed timeframe to understanding your client’s business. Dig deep enough to understand HOW exactly the software is going to impact the workflow of the organization and the end-users. By doing so, you’ll get more clarity on what to work on and, more importantly, what not to work on. Every software developer who has attained significant success will tell you to understand the resulting benefit of the software. This will allow you to focus only on what holds value, while preemptively eliminating the most obvious changes that the client’s review team would recommend. So the next time you sit in front of your computer for a new software project, go through the project’s brief to comprehend the WHY of the software before you begin coding. Making the software eloquent and interactive for the user is what every developer strives for. But while doing so, you must take care not to add too many features, which could eventually overwhelm the user. This is because a confused mind denies everything.


Here’s how algorithms are made

When an algorithm is implemented and verified against the ground truth, it becomes formulated into a mathematical object that can be later used in other algorithms. An algorithm must stand the test of time, prove its value in applications, and its usefulness in other scientific and applied work. Once proven, these algorithms become abstracted, taken as proven claims that need no further investigation. They become the basis and components of other algorithms, and contribute to further work in science. But an important point to underline here is that when the problem, ground-truth, and implementation are formulated into an abstract entity, all the small details, and facts that went into creating it become invisible and tend to be ignored. “If STS has long shown that scientific objects need to be manufactured in laboratories, the heavy apparatus of these locations as well as the practical work needed to make them operative tend to vanish as soon as written claims about scientific objects become certified facts,” Jaton writes in The Constitution of Algorithms.


How to empower IT Sec and Ops teams to anticipate and resolve IT problems

Runecast is a patented enterprise IT platform created for administrators, by administrators, and is tailored to the needs of those teams and enterprise leaders. Most importantly, though, it is a proactive platform aimed at helping IT admins anticipate potential problems before they become a headache and fix potential issues before they lead to service disruptions or exploitable vulnerabilities. The objective is reflected in the name of the company and the platform: casting (tossing) rune stones is how some cultures attempted to predict the future that would happen if no changes were made in the present. Runecast Analyzer does precisely this, and then provides actionable solutions to avoid damaging situations. Its power lies in Runecast AI Knowledge Automation (RAIKA), a technology that uses natural language processing (NLP) to crawl and analyze the previously mentioned mountain of available sources of unstructured knowledge to turn it all into machine-readable rules. RAIKA plugs into many different sources: knowledge base articles, online documentation, forums, blog posts, and even curated Twitter accounts of influencers.


How to Become a Data Scientist

Becoming a data scientist does not necessarily require a master’s degree. There is a significant shortage of data scientists, and some employers are comfortable hiring people who lack a degree, but have the experience needed. The majority of employed data scientists have a master’s degree, but over 25% do not. If you have the experience, a degree is not an absolute necessity to become employed as a data scientist. (If you are genuinely good at statistics, this may be a job for you. If you are not, by nature, good at statistics, this is probably not a job for you.) Data scientists process large amounts of data, often with the goal of increasing a business’ profits. Ideally, a data scientist has a strong understanding of statistics and statistical reasoning, computer languages, and business. They process and analyze large amounts of data to provide useful, meaningful information to their employers. These interpretations are used for decision-making. To provide this information, data scientists often work with messy, unstructured data, coming from emails, social media, and smart devices. 


Edge computing and 5G: What's next for enterprise IT?

When people talk about 5G, they’re usually referring to the major telco networks (and 5G-enabled devices that connect to those networks), which have begun rolling out and will expand considerably over time. Those networks have enterprise impacts, of course. But the “next big thing” for many businesses may be private 5G networks. It’s not a perfect comparison, but a private 5G network is kind of like a private cloud – an answer to the question (among others): “What happens if I want to leverage the technology while retaining as much control as possible?” “In addition to typical 5G, increasingly enterprises are evaluating private 5G models to transform specific parts of their business,” says Joshi, the Everest Group analyst. “This combined with edge devices can meaningfully change the way enterprises work.” Joshi points to use cases such as smart stadiums, connected fleets, autonomous vehicles, smart ports, and remote health as examples where interest is already abundant and the combination of private/public 5G networks and edge architecture could flourish.


A Quick Look at Advanced IoT Sensors for the Enterprise Going Digital

Machine vision is frequently used in EIoT solutions, especially to perform quality control of products. However, these vision systems are complex and rather expensive, which makes them much more difficult for smaller companies to implement. Today, they can be replaced with modern IoT sensors, as Denso showed. Denso has developed the smallest stereo vision sensor for use in cars to help prevent collisions. These vision sensors are implemented in smart cameras and can also be used for object recognition, manufacturing process control, and product quality assurance. Small, practical equipment can be installed in a factory to monitor a large number of production points. A sensor called Visionary-T DT, developed by the company Sick, can detect objects at a distance of up to 160 ft. It is a 3D video sensor that uses Time-of-Flight (ToF) technology to detect the presence or absence of 3D objects. Solutions built on this technology are strong candidates for enterprise security applications, such as protecting objects or restricted areas.
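The Time-of-Flight principle behind a sensor like this reduces to simple arithmetic: the sensor emits a light pulse, measures the round-trip time to the target and back, and halves the light-travel distance. A minimal sketch of that calculation (the 325 ns round trip is an illustrative figure chosen to match the quoted ~160 ft / ~49 m range, not a published spec of the Sick sensor):

```python
# Speed of light in meters per second.
C = 299_792_458.0

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to an object from a Time-of-Flight measurement:
    the light pulse travels out and back, so halve the round-trip
    light-travel distance."""
    return C * round_trip_seconds / 2.0

# A round trip of ~325 nanoseconds corresponds to roughly 48.7 m,
# about the 160 ft detection range quoted above.
print(tof_distance_m(325e-9))
```

In practice ToF cameras measure this per pixel (often via phase shift of modulated light rather than a raw stopwatch), which is what turns the same formula into a full 3D depth image.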


Anonymous Extends Its Russian Cyberwar to State-Run Media

Quantifying an uptick in cyber activity in Ukraine, Israeli firm Check Point said attacks on Ukrainian government and military sites increased by 196% in the first three days of the conflict. And as the situation on the ground has worsened, social media giants have considered or implemented stricter moderation policies against Russian disinformation efforts. Meta, the parent company of Facebook, says in a blog post that it has taken down a network run by users in Russia and Ukraine that was targeting people in Ukraine. Meta Head of Security Policy Nathaniel Gleicher and Director of Threat Disruption David Agranovich say the network violated its policy against "coordinated inauthentic behavior." Meta's security team says users created fake personas and claimed to be based in Kyiv - posing as news editors, a former aviation engineer and an author of a scientific publication on the science of mapping water. They say the activity bears similarities to a takedown in April 2020 that was connected to individuals in Russia, the disputed Donbas region in Ukraine and two now-sanctioned media organizations in Crimea.


Injecting fairness into machine-learning models

The machine-learning technique the researchers studied is known as deep metric learning, which is a broad form of representation learning. In deep metric learning, a neural network learns the similarity between objects by mapping similar photos close together and dissimilar photos far apart. During training, this neural network maps images in an “embedding space” where a similarity metric between photos corresponds to the distance between them. For example, if a deep metric learning model is being used to classify bird species, it will map photos of golden finches together in one part of the embedding space and cardinals together in another part of the embedding space. Once trained, the model can effectively measure the similarity of new images it hasn’t seen before. It would learn to cluster images of an unseen bird species close together, but farther from cardinals or golden finches within the embedding space. The similarity metrics the model learns are very robust, which is why deep metric learning is so often employed for facial recognition, Dullerud says.



Quote for the day:

"In a time of universal deceit - telling the truth is a revolutionary act." -- George Orwell