Daily Tech Digest - February 24, 2022

Yann LeCun: AI Doesn’t Need Our Supervision

Self-supervised learning (SSL) allows us to train a system to learn good representations of the inputs in a task-independent way. Because SSL training uses unlabeled data, we can use very large training sets and get the system to learn more robust and more complete representations of the inputs. It then takes only a small amount of labeled data to get good performance on any supervised task. This greatly reduces the amount of labeled data required by pure supervised learning, makes the system more robust, and makes it better able to handle inputs that differ from the labeled training samples. It also sometimes reduces the sensitivity of the system to bias in the data, an improvement about which we’ll share more of our insights in research to be made public in the coming weeks. What’s happening now in practical AI systems is that we are moving toward larger architectures that are pretrained with SSL on large amounts of unlabeled data. These can be used for a wide variety of tasks. For example, Meta AI now has language-translation systems that can handle a couple hundred languages.
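The pretrain-then-fine-tune recipe LeCun describes can be caricatured in a few lines. This is a toy sketch, not Meta's actual SSL pipeline: the "representation" learned from unlabeled data is just a z-score normalization, and the "supervised task" is a threshold on two labeled points. The point is only that a representation fixed on plentiful unlabeled data lets a tiny labeled set suffice.

```python
import random
import statistics

random.seed(0)

# "Unlabeled" data: many samples from two regimes, labels withheld.
unlabeled = ([random.gauss(0, 1) for _ in range(5000)]
             + [random.gauss(4, 1) for _ in range(5000)])

# Self-supervised stage: fix a representation (here, just z-score
# normalization) from unlabeled data alone; no labels are needed.
mu = statistics.fmean(unlabeled)
sigma = statistics.pstdev(unlabeled)

def represent(x):
    return (x - mu) / sigma

# Supervised stage: with the representation fixed, a tiny labeled
# set is enough. Two examples: (raw value, class).
labeled = [(-0.5, 0), (4.5, 1)]
threshold = sum(represent(x) for x, _ in labeled) / 2

def classify(x):
    return int(represent(x) > threshold)

print(classify(0.2), classify(3.8))  # 0 1
```

Pure supervised learning from those two labeled points alone would have no notion of the data's scale or center; here, thousands of unlabeled samples supply it.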


Leading from the top to create a resilient organisation

In the rush to keep operations going, many businesses made quick decisions and often adopted the wrong services for their organisation. Our own research found that over half (53%) of UK IT decision makers believe they made unnecessary tech investments during the Covid-19 pandemic, and that by speeding up or ignoring their original strategy they have hindered their long-term resilience. One thing almost all businesses have recognised throughout the pandemic is that their people are both the most critical and the most limiting factor in their business. Employee time is valuable, and without technology that supports them in their role, productivity will drop and employees may become an internal threat in terms of cyber security. If businesses acknowledge that hybrid is the new normal and that their people should be the priority, they can go some way towards understanding how IT moves from being an expense to adding value. Although most of this has stemmed from a pandemic no one could have predicted, businesses and their leaders must now make sure they haven’t created the perfect storm: a distributed, disconnected workforce that is at risk of service outages.


Details of NSA-linked Bvp47 Linux backdoor shared by researchers

The attacks employing the Bvp47 backdoor have been dubbed 'Operation Telescreen' by Pangu Lab. A telescreen was a device envisioned by George Orwell in his novel 1984 that enabled the state to remotely monitor people in order to control them. According to Pangu Lab researchers, the malicious code of Bvp47 was developed to give operators long-term control over compromised machines. 'The tool is well-designed, powerful, and widely adapted. Its network attack capability equipped by 0-day vulnerabilities was unstoppable, and its data acquisition under covert control was with little effort,' they said. Complex code, adaptation to multiple Linux versions and platforms, segmented encryption and decryption, and extensive rootkit anti-tracking mechanisms are all part of Bvp47's implementation. It also features an advanced BPF engine, which is employed for covert channels, as well as a communication encryption and decryption procedure. The researchers say the attribution to the Equation Group is based on the fact that the sample code shows similarities with exploits contained in the encrypted archive file 'eqgrp-auction-file.tar.xz.gpg', which was posted by the Shadow Brokers after their failed auction in 2016.


Cloud computing vs fog computing vs edge computing: The future of IoT

Cloud computing is the process of delivering on-demand services or resources over the internet, allowing users to gain seamless access to resources from remote locations without additional time, cost or workforce. Switching from building in-house data centres to cloud computing helps a company reduce its investment and maintenance costs considerably. ... Fog computing is a computing architecture that uses a series of nodes to receive and process data from IoT devices in real time. It is a decentralised infrastructure that provides access to the entry points of various service providers to compute, store, transmit and process data over a network area. This significantly improves efficiency, as the time spent transmitting and processing data is reduced. In addition, the use of protocol gateways helps keep the data secure. ... Cloud and fog computing prove unreliable for applications that require instantaneous responses with tightly managed latency. Edge computing processes data close to its source, in the region considered the ‘edge’ of the network.
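The cloud/fog/edge trade-off described above is, in practice, a latency-driven placement decision. A minimal sketch follows; the tier names are from the article, but the latency budgets and the placement rule are illustrative assumptions, not figures from the source.

```python
# Hypothetical one-way latency budgets (ms) per tier; real figures
# vary widely by deployment.
TIER_LATENCY_MS = {"edge": 5, "fog": 50, "cloud": 200}

def place_workload(max_latency_ms):
    """Pick the most centralized tier that still meets the budget."""
    for tier in ("cloud", "fog", "edge"):
        if TIER_LATENCY_MS[tier] <= max_latency_ms:
            return tier
    raise ValueError("no tier meets this latency budget")

print(place_workload(1000))  # batch analytics        -> cloud
print(place_workload(80))    # near-real-time         -> fog
print(place_workload(10))    # instantaneous control  -> edge
```

The rule prefers the most centralized tier that still fits, reflecting the article's point: cloud for cost, fog for reduced transmission time, edge only when responses must be instantaneous.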


Data Unions Offer a New Model for User Data

One of the promises of a decentralized Web3 is the notion that as users we can all own our data. This is in contrast to Web 2.0, where the prevailing view is that we the users and our data are the product being exploited for financial gain by large centralized organizations. A data union is a scalable way to collect real-time data from individuals and package that data for sale, in a way that is mutually agreeable to both the data source and the packaging application. Much like workers joining a union in real life to rally around a common set of goals, data unions allow individuals to join these unions to aggregate data in a controlled way, complete with the ability to vote on how and where the data is used, through DAO (decentralized autonomous organization) governance. For users, one challenge to the idea of controlling your data is finding an interested buyer. Few data consumers want to go through the hassle of acquiring data from one individual at a time. Data unions solve this by aggregating data from a set of users who opt-in. 


How to protect your Kubernetes infrastructure from the Argo CD vulnerability

In terms of the impact of this vulnerability, Apiiro has determined the following (so far). Note that this information was taken from Apiiro’s website at the time of the announcement and may be subject to change; please refer to Apiiro’s website for the latest information. Here’s what we know about the vulnerability and what it could enable an attacker to do: the attacker can read and exfiltrate secrets, tokens, and other sensitive information residing in other applications, and can “move laterally” from their application to another application’s data. The risk was given a severity rating of high, given that a malicious Helm chart could potentially expose sensitive information stored in a Git repository and “roam” through applications, allowing attackers to read secrets, tokens, and sensitive data that reside within them. The team behind Argo CD quickly provided a patch, which affected organizations should apply as soon as possible since the vulnerability affects all versions of the tool. The patch is available via Argo CD’s GitHub repository.


Understanding your automation journey

In order to achieve shorter-term automation goals, businesses need to evaluate their existing automation needs and ask a few key questions. Are they seeking to automate mundane tasks to increase personal productivity, such as processing emails, setting up notifications or organising files? Personal productivity automation is employee-driven and used to tackle multiple tasks for productivity gains at the individual level. Are they seeking to streamline business processes, such as processing a high volume of invoices or moving data from one system to another? Business process automation (BPA) is also employee-driven but it streamlines business processes to deliver efficiencies and productivity gains across users and departments. Automation might also be an ongoing project, often referred to as an automation Centre of Excellence (CoE), which focuses on intricate, enterprise-wide automation and orchestration. CoE-driven automation is fairly complicated and has a significant influence on automating connected processes.


Going Digital in the Middle of a Pandemic

Independent work-streams allowed them to work in parallel. Does that mean we did not have any dependencies? Not really. We had a daily stand-up, which we called the Scrum of Scrums, with participation from each development team and a focus on dependency and impediment resolution during the iteration. Given the nature of the program and the diverse set of stakeholders, we decided to conduct consolidated program iteration planning and showcase events. Development teams would conduct their planning meetings individually, then join this program meeting to share a summary of the key features taken up in the iteration and the sprint goal. Lastly, to give stakeholders a view of how we were progressing against defined release milestones, we tracked progress against iteration goals vis-à-vis release objectives. A release was defined as the set of features required to onboard users from a specific geography. We provided a one-page weekly or fortnightly program summary to senior CIO leadership and program stakeholders, with data from the ALM tool, along with any blockers and issues that needed executive leadership support.


Cyber Insurance's Battle With Cyberwarfare: An IW Special Report

While the clauses were issued in the company’s marketing association bulletin and allowed individual underwriters flexibility in applying them to individual policies, they were widely interpreted as signifying a shift toward non-coverage. All of Lloyd’s cyber policies are expected to include some variation of these clauses going forward. Lloyd's of London's definition of cyberwar broadly includes “cyber operations between states which are not excluded by the definition of war, cyber war or cyber operations which have a major detrimental impact on a state.” Formal attribution is not necessary for exclusion, an important caveat that would allow for broad latitude in making determinations of whether a given event is actually cyberwar or not. “I think you're going to see a lot more of that, unless there is legislation that comes out that more specifically defines cyberwar. I don't think we're really seeing it at this point,” notes Adrian Mak, CEO of AdvisorSmith. The language in the individual contracts is “what is driving the coverage at this point. And also, interpretation of that [language].”


Digital transformation: Do's and don'ts for IT leaders to succeed

Fear is a natural reaction when we enter uncharted territory. Moreover, the digital transformation journey also requires skill, patience, and a huge financial investment, which adds an extra level of anxiety. Many leaders are uncertain about investing resources into an initiative that they are unsure of, even if there are plenty of stats available to back it up. If you are feeling uncomfortable, try to focus your energy toward embracing your digital transformation initiative and giving it everything it needs to succeed. Remind yourself that in time, you will witness the positive results of your efforts and even scale your business’s revenue. Every enterprise and organization must eventually make digitalization a strategic cornerstone to remain competitive and better serve their constituents. If convenience, scalability, and security are among your business priorities, implementing a thoughtful digital transformation initiative is essential.



Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson

Daily Tech Digest - February 23, 2022

The Metaverse Is Coming; We May Already Be in It

The metaverse has moved beyond science fiction to become a “technosocial imaginary,” a collective vision of the future held by those with the power to turn that vision into reality. Facebook recently changed its name to Meta and committed $10 billion to build out metaverse-related technology. Microsoft just announced that it was spending a record-breaking $69 billion to buy Activision Blizzard, the makers of some of the most popular massively multiplayer online games in the world, including World of Warcraft. This current vision of the metaverse goes well beyond the simple VR of my ping-pong game to eventually include augmented reality (or AR, where smart glasses project objects onto the physical world), portable digital goods and currency in the form of nonfungible tokens (NFTs) and cryptocurrency, realistic AI characters that can pass the Turing test, and brain-computer interface (BCI) technology. BCIs will eventually allow us to not only control our avatars via brain waves, but eventually, to beam signals from the metaverse directly into our brains, further muddying the waters of what is real and what is virtual.


Using Machine Learning for Fast Test Feedback to Developers and Test Suite Optimization

The necessary step of integrating source control and test result data opens up an “incidental” use case concerning the correct routing of defects in multi-team environments. Sometimes there are defects/bugs where it is not clear which team they should be assigned to. Typically, if you have more than two teams it can be cumbersome to find the correct team to take care of a fix. This can lead to a kind of defect ping-pong between the teams, because no one feels responsible until the defect is finally assigned to the correct team. Since the Healthineers data also contains change management logs, there is information about defects and their fixes, e.g. which team performed a fix or which files were changed. In many cases, there are test cases connected to a defect - either existing ones, when a problem is found in a test run before release, or new tests added because a test gap was identified. This makes it possible to tackle the “defect hot potato” problem. Defects can be related to test cases in several ways, for example if a test case is mentioned in the defect’s description or if the defect management system allows explicit links between defects and test cases.
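Routing a defect by the test-case IDs mentioned in its description, as the paragraph above suggests, can be sketched very simply. The ID format (`TC-…`) and the team ownership map below are hypothetical, not taken from the Healthineers system:

```python
import re

# Hypothetical ownership map: which team maintains which test cases.
TEST_OWNER = {"TC-101": "team-imaging", "TC-202": "team-workflow"}

def suggest_team(defect_description):
    """Route a defect via test-case IDs mentioned in its description."""
    for tc in re.findall(r"TC-\d+", defect_description):
        if tc in TEST_OWNER:
            return TEST_OWNER[tc]
    return None  # no linked test case: fall back to manual triage

print(suggest_team("Crash reproduced by TC-202 on the nightly run"))
# team-workflow
```

Explicit defect-to-test links in the defect management system would replace the regex lookup, but the routing idea is the same.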

Curious about quantum computing

As technologists, it’s our responsibility to also keep an eye on these advancements—to learn where they’re headed, to steer our business partners toward the right use cases for them, and even to help shape what they become. Quantum computing is one such technology. I find the very idea of quantum computing fascinating. It takes computer science—the hardware and software that we created in the computer industry—and blends in the fundamentals of nature, physics, and other observed sciences. I believe quantum computing is an area that will fundamentally change the world around us… eventually. But I also find that there’s a lot of hype and misinformation around quantum computing, with only a handful of experts truly in a position to discuss its current state (did you catch what I did there?). I wanted to cut through the hype and go straight to one of these experts myself to get a better understanding of where quantum computing is today and where it’s headed in the future. Introducing, Dr. John Preskill. Dr. John Preskill is a pioneer in the field of quantum computing. He is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, where he is also the Director of the Institute for Quantum Information and Matter.


Is Serverless Just a Stopover for Event-Driven Architecture?

Serverless does illustrate many desirable traits. It is easy to scale up and scale down. It’s triggered by events that are pushed rather than polled. Functions consume resources only for the duration of a job, then exit and free up resources for other workloads. Developers benefit from the abstraction of infrastructure and can deploy code easily via their CI/CD pipelines without concern for how to provision resources. However, the point that Aniszczyk alludes to is that serverless isn’t designed for many situations, including long-running applications, which can actually be more expensive for the end user than running a dedicated application in containers, a VM or on bare metal. As an opinionated solution, it forces developers into the model facilitated by the vendor. In addition, serverless doesn’t have an easy way to handle state. Finally, though serverless deployments are largely deployed in the cloud, they aren’t easily deployed across cloud providers. The tooling and mechanisms for managing serverless are very much specific to each cloud, though with the donation of Knative to the CNCF, a serverless platform could perhaps be developed and deployed with the support of the industry, much as Kubernetes was.


Why Big Tech is losing talent to crypto, web3 projects

Another example of a high-profile person leaving big tech for crypto is John deVadoss, former Managing Director (MD) at Microsoft, where he spent about 16 years of his career in a variety of roles, for example as General Manager (GM) overseeing the developer platform Microsoft.NET, and most recently building Microsoft Digital from zero to half a billion dollars of business worldwide. “I built and led Architecture strategy for .NET at Microsoft; I built the first enterprise frameworks and tools for Visual Studio .NET; I led Microsoft’s first application platform product line and strategy, and I also worked on the Azure developer experience, long before it was called Microsoft Azure,” says deVadoss in an interview with CryptoSlate. After all these years at Microsoft, deVadoss went for Neo – the “Chinese Ethereum” blockchain with high ambitions indeed. ... “I have worked on developer platforms and tools for over 25 years, and it was a natural move to build the blockchain industry’s best developer tools and experience for Neo N3, the first polyglot blockchain platform in the industry and the most developer-friendly,” deVadoss says.


Blockchain: The game-changing technology that’s about to disrupt almost every industry

Blockchain technology can offer effective solutions to banks and non-banking financial institutions (NBFCs) to improve their payment clearing and credit information systems. It can also enhance the security of online banking transactions. With blockchain, banks could combine their payment protocols with smart contracts, and this would allow them to establish multiple data points on each transaction. These data points would further enable banks to monitor their loans, track transactions, and easily manage their invoicing and financing-related activities. In a blockchain-based banking system, each user can be provided with a private key for every transaction on the ledger; this key works like a unique digital signature. So at any point, if a banking record is altered, the digital signature is rendered invalid, and the whole banking network is notified of the anomaly. ... Cryptocurrencies provide an alternative to traditional banking for people who remain unbanked, for various reasons. Their use has also been suggested as a way to decouple currencies from the traditional monetary systems. For example, the hyperinflation that began in Venezuela in 2016 resulted in a steep devaluation of the nation’s currency.
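The tamper-evidence described above, where altering a record invalidates its signature, can be illustrated with stdlib primitives. This sketch uses an HMAC as a stand-in for a real digital signature (actual ledgers use asymmetric key pairs so anyone can verify without the secret); the point is only that any change to a signed record breaks verification:

```python
import hashlib
import hmac

# Stand-in for a user's private key; a real blockchain would use an
# asymmetric key pair, not a shared secret.
PRIVATE_KEY = b"user-secret"

def sign(record):
    return hmac.new(PRIVATE_KEY, record.encode(), hashlib.sha256).hexdigest()

def verify(record, signature):
    return hmac.compare_digest(sign(record), signature)

entry = "transfer 100 from A to B"
sig = sign(entry)
print(verify(entry, sig))                       # True
print(verify("transfer 900 from A to B", sig))  # False: tampering detected
```

In the banking scenario from the article, a failed verification is exactly the anomaly that would be broadcast to the network.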


Behind the stalkerware network spilling the private phone data of hundreds of thousands

TechCrunch first discovered the vulnerability as part of a wider exploration of consumer-grade spyware. The vulnerability is simple, which is what makes it so damaging, allowing near-unfettered remote access to a device’s data. But efforts to privately disclose the security flaw to prevent it from being misused by nefarious actors have been met with silence, both from those behind the operation and from Codero, the web company that hosts the spyware operation’s back-end server infrastructure. The nature of spyware means those targeted likely have no idea that their phone is compromised. With no expectation that the vulnerability will be fixed any time soon, TechCrunch is now revealing more about the spyware apps and the operation so that owners of compromised devices can uninstall the spyware themselves, if it’s safe to do so. Given the complexities in notifying victims, CERT/CC, the vulnerability disclosure center at Carnegie Mellon University’s Software Engineering Institute, has also published a note about the spyware.



Matter, explained: What is the next-gen smart home standard?

Matter uses a wireless technology based on Internet Protocol (IP), which Wi-Fi routers use to assign an address to your connected devices. By natively integrating an IP-based protocol for smart home devices, there are no awkward handoffs or other wireless technologies to deal with. It paves the way to a future where all Matter-certified devices will work alongside each other in synchronous harmony. Bringing our smart home devices together like this not only makes setup a breeze, it’s absolutely essential when designing a single universal smart home environment that just works. The ultimate goal here is to create a “set it and forget it” situation where these devices essentially fade into the background rather than sit in the foreground. Thankfully, Matter sounds like the thing we need to finally bridge that gap and fix the smart home situation once and for all. We have some of the biggest tech giants working together to make Matter a unified protocol in our smart homes of the future.


Mitigating Risks in Cloud Native Applications

As the shift to the work-from-anywhere model becomes mainstream and cloud applications continue to surge, it is driving new developments: “security and observability is converging,” said Tipirneni. While DevOps and IT security have traditionally been treated as separate disciplines, their roles and responsibilities are increasingly moving toward the DevSecOps trend. “Solving the security problem and observability problem is your ability to instrument everything that is happening in the system at a very fine-grained level — from gathering the data and really making sense of the data,” said Tipirneni. “Developers try to work around security controls that are complex, but bringing those two together puts the power in the developers’ hands,” he added. Information security and development teams have traditionally managed solutions like Tigera’s Calico and Envoy, but for cloud-first companies that do not have legacy applications, “DevOps, Cloud Ops engineers are pretty much responsible end to end,” said Tipirneni. From deploying applications to troubleshooting and managing compliance and security, “the challenge they have is that there’s just way too much on their plate to do,” Tipirneni added.


NFT use cases for businesses

NFTs have also shown they can reveal to organisations the interests of their customers, without marketing teams needing to scour internet usage data. In time, NFTs could be used to learn more about what customers need before a product is purchased. Conor Svensson, founder and CEO of Web3 Labs, said: “I believe the true inflection point of adoption will be when the majority of smartphone users hold them. Whilst the technology is there to do this currently, only a minority of people keep NFTs on them. This will be key for true mass adoption. “An NFT can represent any real-world or virtual good. As it stands, their greatest value beyond the financial is the communities that are forming around holders of them. This is a marketeer’s dream, as prior to NFTs it wasn’t easy to learn that a person was interested in a product or brand unless they purchased it or engaged with it by signing up for email updates, liking Twitter posts, etc. “The NFTs a person holds in a wallet can be viewed as an expression of their interests, and the fact that this is public information is a powerful tool for targeting individuals and communities.”



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - February 22, 2022

Partner Across Teams to Create a Cybersecurity Culture

Just because a software engineer doesn’t work on the security team doesn’t mean that security isn’t their responsibility. In addition to the standard security training, you can further empower your engineering teams by training and encouraging them to think like hackers. I was fortunate enough to work for a company some time ago that scheduled annual competitions with prizes and bragging rights. These competitions served as security training and engaged us in a series of engineering puzzles that included SQL injection, cross-site scripting (XSS), cryptography and social engineering. ... Even with well-implemented training programs and a dedicated cadre of security-minded engineers building your applications, there is still plenty for your security engineers to work on. The shared-responsibility model will reduce the risk of successful phishing attacks or other malicious activity, but it won’t remove it entirely. Ideally, security teams will move from a place where they are constantly fighting fires to one where they can engage in strategic initiatives to further improve security for the organization, automate risk detection wherever possible, and prepare your organization for the future.
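SQL injection, one of the competition puzzle topics mentioned above, is easy to demonstrate in a few lines. A minimal sketch using Python's built-in sqlite3, showing the vulnerable string-concatenation pattern next to the parameterized fix:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(leaked)  # [('s3cret',)] -- the injection succeeded

# Safe: a parameterized query treats the input as data, not SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] -- no such user
```

Exercises like this are exactly what gives non-security engineers the attacker's-eye view the article advocates.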


Agile Doesn’t Work Without Psychological Safety

Soon after implementing agile, many organizations revert to the default position of worshiping at the altar of technical processes and tools, because cultural considerations seem abstract and difficult to operationalize. It’s easier to pay lip service to the human side and then move on to scrumming, sprinting, kanbaning, and kaizening because these processes serve as tangible, measurable, and observable indicators, giving the illusion of success and the appearance of developing agile at scale. Begin your agile transformation by framing agile as a cultural rather than a technical or mechanical implementation. In doing so, be careful not to approach culture as a workstream. A workstream is defined as the progressive completion of tasks required to finish a project. When we approach culture as a workstream within the context of agile, we classify it as something that can be completed. Culture cannot be completed. Yet I see agile teams attempting to project-manage it as part of the work breakdown structure, as if it has a beginning, middle, and end. It doesn’t.


Inside the U.K. lab that connects brains to quantum computers

While BCIs and quantum computers are undoubtedly promising technologies emerging at the same point in history, the question is why bring them together – which is exactly what the consortium of researchers from the U.K.’s University of Plymouth, Spain’s University of Valencia and University of Seville, Germany’s Kipu Quantum, and China’s Shanghai University is seeking to do. Technologists love nothing more than mashing together promising concepts or technologies in the belief that, when united, they will represent more than the sum of their parts. Sometimes this works gloriously. As the venture capitalist Andrew Chen describes in his book The Cold Start Problem, Instagram leveraged the emergence of camera-equipped smartphones and the simultaneous powerful network effects of social media to become one of the fastest-growing apps in history. Taking two must-have technologies and combining them doesn’t always work, though. Apple CEO Tim Cook once quipped that “you can converge a toaster and a refrigerator, but, you know, those things are probably not going to be pleasing to the user.”


Three ways COVID-19 is changing how banks adapt to digital technology

Bank leaders face the difficult task of balancing the traditional approach to risk management with the need to respond quickly to a crisis that has created massive changes to their operating environment. Criminal cyber activity, including fraud and phishing attacks, has increased as more employees work remotely. However, as one participant said: “We have not yet seen the massive increase in sophisticated, advance persistent threat cyber attacks that we normally associate with events like these.” As banks shift from crisis mode, their boards need to address new emerging risks, such as video and voice communication surveillance with everyone using Zoom and other platforms, data security controls for the use of personal equipment, and cases of third and fourth parties falling victim to cyber issues. ... As the economic impacts of the pandemic become clearer, banks are updating risk models and stress scenarios in an attempt to stay ahead of the curve. However, uncertainty in the operating environment continues to pose challenges. A lack of regulatory harmonization may further complicate benchmarking among peers across countries, though there is hope that this will improve soon.


The threat of quantum computing to security infrastructure

The report states: “The encryption technologies that are securing Canada’s financial systems today will one day become obsolete. If we do nothing, the financial data that underpins Canada’s economy will inevitably become more vulnerable to cyber criminals.” In the US, as noted above, the National Security Agency took an early lead in identifying the perceived threat. On January 19, 2022, an action from the US president was made public: the White House issued a “Memorandum on Improving the Cybersecurity of National Security, Department of Defense and Intelligence Community Systems.” The document shows the urgency with which major perceived threats are being addressed. It outlines major actions to avoid the security lapses that would be created by quantum computers targeting critical secret data and related infrastructure, and identifies the management responsibilities in the various agencies to implement these measures within a matter of months. This perceived threat to existing cybersecurity will generate a great deal of private-industry activity and bring well-funded new companies into the business of transitioning to new security solutions.
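The threat these documents address is concrete: RSA's security rests on the hardness of factoring, and Shor's algorithm on a sufficiently large quantum computer would factor real key sizes efficiently. A toy sketch with the classic textbook modulus, where brute-force factoring stands in for what a quantum computer could do to 2048-bit keys:

```python
# RSA's security rests on factoring n = p*q being hard. Brute force
# works only for toy moduli; Shor's algorithm on a large quantum
# computer would do the same to real 2048-bit keys.
def factor(n):
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

n, e = 3233, 17          # textbook toy modulus: 3233 = 53 * 61
p, q = factor(n)         # trivial here, classically infeasible at scale
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # the private exponent falls out immediately
print(p, q, d)           # 53 61 2753
```

Once the modulus is factored, recovering the private key is immediate, which is why the memorandum pushes migration to post-quantum algorithms before large quantum machines exist.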


AI fairness in banking: the next big issue in tech

“People want to be treated fairly by an agent whether artificial or not. The difference for a lot of applications is that people are not aware of the full extent of the decision making and the statistical regularities across a larger population where some of these issues can arise. There is a lot of cynicism around these decisions.” He adds that there are technical as well as organisational solutions that financial services providers need to apply. This, combined with policies of transparency about the processes in place, all combines to provide an overall strategy. He adds: “The first thing is to have processes of regularly reporting on, examining and making corrections to the data that is used to train models as well as to test them. “So, a simple test is representation of people that belong to legally protected categories by race, age, gender, ethnic origin and religious status, to determine if there is enough data to represent each of these groups with accurate models. In addition, there is a need to determine whether there are other inputs to the model, or features, that could be correlated with these protected classes and have a potentially adverse or discriminatory impact on the output of the model.”
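The checks described above (counting representation of protected groups, then looking for disparate outcomes or proxy features) can be sketched on toy data. All records, group names and features below are hypothetical:

```python
from collections import Counter

# Hypothetical applicant records: (age_group, postcode_zone, approved)
records = [
    ("under_40", "A", 1), ("under_40", "A", 1), ("under_40", "B", 1),
    ("over_40",  "B", 0), ("over_40",  "B", 0), ("over_40",  "A", 1),
]

# Check 1: is each protected group represented well enough to model?
counts = Counter(group for group, _, _ in records)
print(dict(counts))  # 3 records per group here

# Check 2: do outcomes differ sharply across protected groups?
def approval_rate(group):
    outcomes = [approved for g, _, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

for group in counts:
    print(group, round(approval_rate(group), 2))
# A large gap (here 1.0 vs 0.33) flags the model for closer review,
# e.g. checking whether a feature like postcode proxies for age.
```

Real fairness audits use larger samples and formal disparity metrics, but this is the shape of the "simple test" described in the quote.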


4 common misunderstandings about enterprise open source software

It might seem natural to download community-supported bits from the Internet rather than purchase an integrated product. This is especially the case when the community projects are relatively simple and self-contained or if you have reasons to develop independent expertise or do extensive customization. (Although working with a vendor to get needed changes into the upstream project is a possible alternative in the latter case.) However, if the software isn’t a differentiating capability for your business, hiring the right highly-skilled engineers is neither easy nor cheap. There’s also the ongoing support burden if your downloaded projects turn into a fork of the upstream community project. And if you don’t want them to, you’ll need to factor in the time to work in the upstream projects to get needed features added. There’s also a lot of complexity in categories like enterprise open source container platforms in the cloud-native space. Download Kubernetes? You’re just getting started. How about monitoring, distributed tracing, CI/CD, serverless, security scanning, and all the other features you’ll want in a complete platform? 


Leadership when the chips are down

Particularly noteworthy is the obsessive nature of Shackleton’s encounter with a territory so resistant to accurate perception. We risk bathos to say that the business landscape presents challenges on a par with the South Pole, yet the perceptual difficulties posed by Antarctica offer clear parallels for executives and entrepreneurs. The southernmost continent is unpredictable, unstable, and unforgiving. Compasses don’t behave normally. Much of what appears terra firma is actually floating ice, and deadly crevasses lurk under the snow. Snow blindness, a painful effect of the dazzling surroundings, can make vision itself impossible. ... Shackleton’s failings as a manager were manifest in his planning for the Heart of the Antarctic expedition. For a trip on foot of 1,720 miles to and from the Pole, his four-man unit brought food for just 91 days of hard labor, high altitude, and mind-numbing cold. His return instructions to the crew of the Nimrod, the ship that dropped off his party, were impossibly vague. 


How can banks remain relevant in the fastest growing digital market in the world?

While bolting on a digital banking system may be a quick fix for incumbents, the only way for FIs to truly keep up with the pace of change and future-proof their business is to invest in modern architecture that offers them the flexibility required to develop and deploy products and services at speed. Built with advanced customisation at their core, modern platforms enable FIs to approach product development with a different mindset to those struggling with legacy systems. As a result, FIs benefit from faster time-to-market, being able to scale up innovative digital operations, offer new products or services, and respond to ever-changing market requirements much faster. Shifting consumer behaviours, coupled with intensified competition, are making it increasingly difficult for banks in the APAC region to remain relevant. They are fighting not only to keep their loyal customer base, but also to stay ahead of the curve by offering customers the advanced digital services they require. Only by ensuring they have a comprehensive, future-proof system in place, underpinning their operations, will they truly be able to embrace the digital future.


Sustaining Agile Transformation – Our Experience

The organization needs to rethink and create a career roadmap for Agile roles such as Product Owner, Scrum Master, and Developer. It must build and enhance self-paced and embedded learning experiences, develop role-based training, and open up new learning areas. For certain key roles, organizations can focus on establishing academies, such as a Scrum Master Academy; this ensures continuous learning and a flow of trained Scrum Masters as and when needed. Coaching skills should be taught to and embedded in Agile leaders and change agents, and leaders must be trained in, and embrace, the foundational values and principles. Establishing and retaining a central team, such as a lean Centre of Excellence (CoE), is very beneficial for overseeing the transformation and providing support when needed; the organization can deliberate on establishing the CoE at the divisional or organizational level. Collaborative forums such as Communities of Practice (CoPs), Guilds, and Chapters should be established and run successfully.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - February 21, 2022

What’s the buzz around AGI safety

Specification in AGI systems defines a system’s goal and makes sure it aligns with the human developer’s intentions and motives. These systems follow a pre-specified algorithm that allows them to learn from data, which helps them to achieve a specific goal. Meanwhile, both the learning algorithm and the goal are given by the human designer—for example, goals like minimising a prediction error or maximising a reward. During training, the system will try to complete the objective, irrespective of how well it reflects the designer’s intent. Hence, designers should take special care to specify an objective that will lead to the desired or optimal behaviour. If the goal is a poor proxy for the intended behaviour, the system will learn the wrong behaviour, and the goal is considered “misspecified.” This is the likely outcome whenever the specified goal does not align with the desired behaviour. To adhere to AGI safety, the system designer must understand why the system behaves the way it does and whether its behaviour will ever align with the designer’s intent. A robust set of assurance techniques already exists for older-generation systems.
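A toy example makes the "poor proxy" failure concrete. The scenario below is hypothetical (a standard cleaning-robot illustration, not from the article): the intended goal is that the room ends up clean, while the specified reward merely counts dust collections, which can be gamed by dumping dust back out and re-collecting it:

```python
# Goal misspecification in miniature: the specified reward is a proxy that
# diverges from the intended outcome under optimisation pressure.
def specified_reward(actions):
    # Proxy objective: +1 for every collection event.
    return sum(1 for a in actions if a == "collect")

def intended_reward(actions):
    # Intended objective: the room is clean at the end of the episode.
    dust_on_floor = 1
    for a in actions:
        if a == "collect":
            dust_on_floor = 0
        elif a == "dump":
            dust_on_floor = 1
    return 1 if dust_on_floor == 0 else 0

honest = ["collect"]
gaming = ["collect", "dump", "collect", "dump"]

print(specified_reward(honest), specified_reward(gaming))  # 1 2
print(intended_reward(honest), intended_reward(gaming))    # 1 0
```

An agent optimising the specified reward prefers the gaming policy, which scores higher on the proxy yet leaves the room dirty, exactly the misalignment the specification problem describes.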


Google Cloud CISO Phil Venables On 8 Hot Cybersecurity Topics

“A different environment compared to my career in financial services — many things the same, but many things different, especially the scale of what we do and our ability to invest even more in security than even some of the largest banks are able to invest,” Venables said. Google integrated its risk, security, compliance and privacy teams from across the company into the Google Cybersecurity Action Team announced last October. The consolidated team will provide strategic security advisory services, trust and compliance support, customer and solutions engineering, and incident response capabilities. “Those were all teams that were doing really, really good stuff, but we thought it made sense for them to be part of one integrated organization for cloud given the importance of all four of those topics, making sure that we provide even more focus on those things together,” Venables said. “That’s working out very well, and I think that’s reflected in a lot of large organizations that are aligning their risk, compliance, security and privacy teams because of a lot of the commonality between the types of controls that you have to implement to drive those things effectively.”


Real-Time Policy Enforcement with Governance as Code

Cloud governance as code encourages collaboration and promotes agility. Through this approach, development, operation, security and finance teams can gain visibility into policies, and they can collaborate more effectively on policy definition and enforcement. Teams can quickly and efficiently modify policies and create new policies, and changes can be implemented in much the same way teams modify application code or underlying infrastructure in today’s agile, DevOps environments. ... Governance as code is emerging as a foundational requirement for organizations scaling operations in the cloud. It champions automated management of the complex cloud ecosystem via a human-readable, declarative, high-level language. Infrastructure and security engineering teams can adopt governance as code to enforce policies in an agile, flexible and efficient manner while reducing developer friction. With governance as code, developers can avoid the obstacles that often hinder or discourage cloud adoption altogether, allowing for greater automation of and visibility into an organization’s cloud infrastructure, unifying teams in their greater mission to achieve success.
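The "human-readable, declarative" idea can be sketched very simply. The policies, resource shapes, and rule names below are hypothetical, standing in for what a real engine (OPA, Cloud Custodian, etc.) would provide: policies live as data alongside application code, and enforcement is just automated evaluation against the current resource inventory:

```python
# Governance as code in miniature: declarative policies evaluated
# automatically against cloud resources described as data.
POLICIES = [
    {"id": "storage-encrypted", "applies_to": "bucket",
     "check": lambda r: r.get("encrypted") is True},
    {"id": "no-public-buckets", "applies_to": "bucket",
     "check": lambda r: r.get("public") is not True},
]

def evaluate(resources, policies=POLICIES):
    """Return (resource, policy) pairs for every violation found."""
    violations = []
    for res in resources:
        for pol in policies:
            if res["type"] == pol["applies_to"] and not pol["check"](res):
                violations.append((res["name"], pol["id"]))
    return violations

resources = [
    {"type": "bucket", "name": "logs", "encrypted": True, "public": False},
    {"type": "bucket", "name": "assets", "encrypted": False, "public": True},
]
print(evaluate(resources))
# [('assets', 'storage-encrypted'), ('assets', 'no-public-buckets')]
```

Because the policy list is plain data under version control, changing a rule is a reviewed commit, the same workflow teams already use for application and infrastructure changes.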


Leveraging machine learning to find security vulnerabilities

Code security vulnerabilities can allow malicious actors to manipulate software into behaving in unintended and harmful ways. The best way to prevent such attacks is to detect and fix vulnerable code before it can be exploited. GitHub’s code scanning capabilities leverage the CodeQL analysis engine to find security vulnerabilities in source code and surface alerts in pull requests – before the vulnerable code gets merged and released. To detect vulnerabilities in a repository, the CodeQL engine first builds a database that encodes a special relational representation of the code. On that database we can then execute a series of CodeQL queries, each of which is designed to find a particular type of security problem. Many vulnerabilities are caused by a single repeating pattern: untrusted user data is not sanitized and is subsequently accidentally used in an unsafe way. For example, SQL injection is caused by using untrusted user data in a SQL query, and cross-site scripting occurs as a result of untrusted user data being written to a web page.
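The injection pattern described above is easy to demonstrate. This sketch uses SQLite with illustrative table and input values; the point is the contrast between concatenating untrusted data into query text and binding it as a parameter:

```python
# SQL injection: untrusted user data used unsafely vs. passed as a
# bound parameter that the driver treats as a value, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

untrusted = "nobody' OR '1'='1"

# Vulnerable: the input flows into the query text unsanitized, so the
# attacker-controlled OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + untrusted + "'").fetchall()
print(unsafe)   # [('alice',), ('root',)]

# Safe: a parameterized query; no user named "nobody' OR '1'='1" exists.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (untrusted,)).fetchall()
print(safe)     # []
```

This is exactly the "untrusted data reaches an unsafe sink" shape that taint-tracking queries such as CodeQL's are written to find.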


AI Is Helping Scientists Explain the Brain

A raging debate that erupted recently in the field of decision-making highlights these difficulties. It started with controversial findings of a 2015 paper in Science that compared two models for how the brain makes decisions, specifically perceptual ones. Perceptual decisions involve the brain making judgments about what sensory information it receives: Is it red or green? Is it moving to the right or to the left? Simple decisions, but with big consequences if you are at a traffic stop. To study how the brain makes them, researchers have been recording the activity of groups of neurons in animals for decades. When the firing rate of neurons is plotted and averaged over trials, it gives the appearance of a gradually rising signal, “ramping up” to a decision. ... In the standard narrative based on an influential model that has been around since the 1990s, the ramp reflects the gradual accumulation of evidence by neurons. In other words, that is how neurons signal a decision: by increasing their firing rate as they collect evidence in favor of one choice or the other until they are satisfied.
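The evidence-accumulation account can be simulated in a few lines. This is a generic drift-diffusion-style sketch (not the specific models compared in the Science paper), with arbitrary drift and noise parameters, showing how averaging noisy accumulating traces over trials yields the smooth "ramp":

```python
# Evidence accumulation in miniature: each trial is a noisy random walk
# with a small upward drift (net evidence for one choice); the trial
# average rises smoothly, like the "ramping" signal in recordings.
import random

random.seed(0)

def one_trial(steps=100, drift=0.1, noise=1.0):
    evidence, trace = 0.0, []
    for _ in range(steps):
        evidence += drift + random.gauss(0, noise)  # one noisy sample
        trace.append(evidence)
    return trace

n_trials = 200
avg = [sum(step_values) / n_trials
       for step_values in zip(*(one_trial() for _ in range(n_trials)))]
print(round(avg[9], 2), round(avg[49], 2), round(avg[99], 2))  # ramps upward
```

Any single trial is jagged; the ramp only emerges in the average, which is part of why distinguishing "gradual accumulation" from alternatives (such as discrete steps averaged across trials) is so contentious.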


What does the future of artificial intelligence look like within the life sciences?

The biggest hurdle for scientists is being able to more regularly adopt and implement the infrastructure and existing tools needed to run their lab using AI. This is especially true for open-ended research - or when scientists don't have a predefined notion of what experiments will need to happen in what steps to reach for the desired outcome. The current infrastructure for managing lab data was largely set up in the image of lab notebooks. Many companies are tackling this problem by trying to retrofit data generated in this model to fit the structure required for more in-depth data analysis. At ECL, we’ve tackled this problem by proceduralizing the lab activities themselves, as well as the storage of the data encompassing those activities. In this way, data is comprehensive, organized, reproducible, and ready to be deployed into any given analysis model. ... As scientists and companies recognize the reproducibility and trustworthiness of data generated in a cloud lab like ECL, their focus will shift away from concern over laboratory operations and logistics and more towards the science itself. 


From The Great Resignation To The Great Return: Bringing Back The Workforce

The biggest challenge is the enormous pressure placed on employees who don’t want to leave their jobs. Since talent leaders can’t fill open roles fast enough, employees that want to stay have had to take on the workload of multiple people in addition to their day-to-day responsibilities. In addition to that, it’s a candidate’s market, and job seekers have many job options and often have multiple offers. As a result, companies have to make hiring decisions faster and offer better benefits to attract talent and stand out among other companies. Another challenge, according to Cassady, is that employees are missing key connection points in this remote environment. “We have found that some of the key factors in retaining your workforce are that people need to feel connected to the company’s mission, the company’s leaders, and a connection to the team they work with.” In addition, she adds, “Talent leaders must continue to create communities within their company to retain their employees.”


The new rules of succession planning

First, start with the what and not the who. Doing so will lay out a more realistic and substantive framework. Second, from this vantage point, try to explicitly minimize the noise in the boardroom. Ensure that the directors are using shared, contextual definitions of core jargon, such as strategy, agility, transformation, and execution. Third, root the follow-on analyses of the candidates in that shared understanding, and base any assessments on a factual evaluation of their track records and demonstrated potential in order to minimize the bias of the decision-makers themselves. Many companies sidestep this hard work when developing their short list of candidates and rely instead on familiar paths: the CEO may have preferred candidates, or a search firm or industrial psychologist may have been asked to draft an ideal role profile or a set of competencies to prescreen internal and external candidates. This overemphasis on profiling the who of the next CEO triggers two failure points. It leans right into “great leader” biases (the notion that the right person will single-handedly solve all the company’s problems).


IT jobs: 7 hot automation skills in 2022

“One of the most important approaches to automation is infrastructure as code,” says Chris Nicholson, head of the AI team at Clipboard Health. “Infrastructure as code makes it easier to spin up and manage large clusters of compute, which in turn makes it easier to introduce new products and features quickly, and to scale in response to demand.” Kelsey Person, senior project manager at the recruiting firm LaSalle Network, agrees: Experience with infrastructure as code pops on a resume right now, because it indicates the knowledge and ability needed to help drive significant automation initiatives elsewhere. “One skill we are seeing in more demand is knowledge of DevOps tools, namely Ansible,” Person says. “It can help organizations automate and simplify tasks and can save time when developers and DevOps professionals are installing packages or configuring many servers.” The ability to write homegrown automation scripts is a mainstay of automation-centric jobs – it’s essentially the skill that never goes out of style, even as a wider range of tooling enables non-developers to automate some previously manual processes.
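The core mechanic behind infrastructure as code can be sketched without any real provider. The resource names and attributes below are hypothetical, and a real tool (Terraform, Ansible, Pulumi) does far more, but the essence is the same: desired state is declared as data, and the tool computes a plan by diffing it against what currently exists:

```python
# Infrastructure as code in miniature: declare desired state, diff it
# against current state, and derive the actions needed to converge.
desired = {
    "web-1": {"type": "vm", "size": "small"},
    "web-2": {"type": "vm", "size": "small"},
    "db-1":  {"type": "vm", "size": "large"},
}
current = {
    "web-1": {"type": "vm", "size": "small"},
    "db-1":  {"type": "vm", "size": "medium"},  # drifted from the spec
    "old-1": {"type": "vm", "size": "small"},   # no longer declared
}

def plan(desired, current):
    create = sorted(set(desired) - set(current))
    destroy = sorted(set(current) - set(desired))
    update = sorted(k for k in set(desired) & set(current)
                    if desired[k] != current[k])
    return {"create": create, "update": update, "destroy": destroy}

print(plan(desired, current))
# {'create': ['web-2'], 'update': ['db-1'], 'destroy': ['old-1']}
```

Because the spec is code, scaling up a cluster is a one-line change that is reviewed, versioned, and applied identically every time, which is what makes it "easier to spin up and manage large clusters."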


Why cloud-based cellular location is the solution to supply chain disruption

Cloud-based cellular location leveraging 5G, in combination with seamless roaming integrated into a WAN, provides highly accurate end-to-end visibility, starting with sub-metre accuracy on the factory floor with private networks and extending to outdoor locations whenever and wherever an asset is transported, from the beginning to end of a supply chain. Cloud-based cellular location technologies are already in use today, leveraging ubiquitous 4G/5G networks for massive IoT asset tracking applications. Their adoption is expected to increase significantly and broaden to more and more critical IoT use cases as well. According to ABI Research, overall penetration of the cloud-based cellular location installed base will reach 42% by 2026. In this period, it’s estimated that there’ll be a four-fold increase in penetration driven largely by devices on Cat-1, Cat-M, and NB-IoT networks. Asset tracking will be the main driver of growth on these networks, as cloud-based cellular location becomes more important for driving down costs. Cloud-based cellular location can enable enterprises to unlock opportunities for critical IoT, and will help revolutionise supply chain management.



Quote for the day:

"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis

Daily Tech Digest - February 20, 2022

API Management vs. Service Mesh: The Choice Doesn’t Have to Be Yours

API management is often described as a north-south traffic management pattern, which connects services and applications with external clients. This north-south pattern also applies to inter-domain traffic, as we saw earlier. Companies control access to enterprise or domain boundaries and can discern who is allowed to access the systems, precisely which resources they are allowed to access, whether read and/or write permissions, and with customizable rate limits. This architecture provides authentication, traffic mediation, security, and encryption options, along with sophisticated authorization systems. In essence, it is about helping to manage the relationships between services or APIs and multiple consumers. ... Service meshes provide the connective tissue between services, ensuring that different parts of an application can reliably and securely share data with one another. They route requests from one service to the next, optimizing how all the moving parts work together. Within cloud-native application development approaches, they help to assemble large numbers of discrete services into functional applications. 
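One of the capabilities attributed to the API management layer above, customizable rate limits per consumer, is commonly implemented as a token bucket. A minimal sketch (capacity and refill values are arbitrary examples):

```python
# A per-consumer token bucket: bursts up to `capacity`, then throttled
# to `refill_per_sec` sustained requests per second.
class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Refill in proportion to elapsed time, then spend one token.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1)
# Three requests at t=0: the third exceeds the burst capacity.
print([bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0)])
# [True, True, False]
print(bucket.allow(1.0))  # True: one token refilled after a second
```

An API gateway would keep one such bucket per API key or consumer, which is how different clients can be given different, customizable limits at the domain boundary.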


Business Technology Consulting is a Way to Start Improving Customers as Business Leaders

IT consultants have good news: their services are still highly sought after. The COVID-19 pandemic has transformed the IT consulting industry. A combination of increased competition and more freelance and smaller specialized consultancies has created a highly competitive market. You will need to start your business on the right foot, just like any other business. IT professionals must create a detailed business plan to succeed in a highly competitive market. Structured plans should include growth, costs, marketing, sales, training, qualifications, and technology. Technology has changed the way that we live, shop and work. The technology revolution is continuing to transform everything about our lives. A robust technological foundation can help organizations increase their agility productivity and identify new business opportunities. Technology consulting can be called many things, including IT consulting for business, IT services, and IT advisory. Companies must develop a secure and efficient Information Technology strategy (IT) strategy to embark on a digital transformation journey. This is not an easy task for start-ups and corporations alike.


The Power and Possibilities of Data Science

Not only have job opportunities for data scientists cropped up everywhere, but the role has transformed the work life of millions of people who benefit from their innovations. Tasks that were once laboriously performed by people have become automated, freeing us humans in legal, financial, and corporate industries (and many others) to focus on more important and well, human work. So how did we get here, and what’s next for this growing industry? Late last year, leaders from Relativity and Text IQ, a Relativity company, gathered to talk about just that. In a Coffee + Chat session presented by Relativity’s talent team, Apoorv Agarwal, Aron Ahmadia, and Peter Haller discussed the origins of data science, where they see the industry going in the next few years, and what about artificial intelligence makes them most excited. “I think of data science as fundamentally people who love data and who believe that data can be used and leveraged to solve problems,” said Aron, director of data science at Relativity. In a previous role he worked with the U.S. Department of Defense, helping to disentangle networks of sex traffickers—and using data science to identify them.


Azure SQL Database ledger

Updatable ledger tables are ideal for application patterns that expect to issue updates and deletions to tables in your database, such as system of record (SOR) applications. Existing data patterns for your application don't need to change to enable ledger functionality. Updatable ledger tables track the history of changes to any rows in your database when transactions that perform updates or deletions occur. An updatable ledger table is a system-versioned table that contains a reference to another table with a mirrored schema. The other table is called the history table. The system uses this table to automatically store the previous version of the row each time a row in the ledger table is updated or deleted. The history table is automatically created when you create an updatable ledger table. ... Append-only ledger tables are ideal for application patterns that are insert-only, such as security information and event management (SIEM) applications. Append-only ledger tables block updates and deletions at the API level. This blocking provides more tampering protection from privileged users such as system administrators and DBAs.
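The updatable-ledger mechanics can be illustrated conceptually. The sketch below is plain Python, not the Azure SQL feature itself (which does this transparently in T-SQL, with cryptographic verification on top); it only shows the bookkeeping pattern: every update or delete first preserves the prior row version in a mirrored history store:

```python
# Conceptual model of an updatable ledger table: a current table plus a
# system-managed history of previous row versions.
class LedgerTable:
    def __init__(self):
        self.rows = {}      # current, queryable state
        self.history = []   # prior versions, appended automatically

    def insert(self, key, value):
        self.rows[key] = value

    def update(self, key, value):
        self.history.append((key, self.rows[key]))  # keep prior version
        self.rows[key] = value

    def delete(self, key):
        self.history.append((key, self.rows.pop(key)))

t = LedgerTable()
t.insert("acct-1", {"balance": 100})
t.update("acct-1", {"balance": 50})
t.delete("acct-1")
print(t.rows)     # {}
print(t.history)  # [('acct-1', {'balance': 100}), ('acct-1', {'balance': 50})]
```

An append-only ledger table is the stricter variant: it would simply raise an error from `update` and `delete`, blocking those operations entirely.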


Data Quality Dimensions

Data Quality dimensions are analogous to the way width, length, and height are used to express a physical object’s size. These dimensions help us to understand Data Quality by giving it a scale, and by comparing it to other data measured against the same scale. Data Quality ensures an organization’s data can be processed and analyzed easily for any type of project. When the data being used is of high quality, it can be used for AI projects, business intelligence, and a variety of analytics projects. If the data contains errors or inconsistent information, the results of any project cannot be trusted. The accuracy of Data Quality can be measured using Data Quality dimensions. ... Data Quality dimensions can be used to measure (or predict) the accuracy of data. This measurement system allows data stewards to monitor Data Quality, to develop minimum thresholds, and to eliminate the root causes of data inconsistencies. However, there is currently no established standard for these measurements. Each data steward has the option of developing their own measurement system.
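Two dimensions that appear in most such measurement systems, completeness and validity, are straightforward to compute. The records, field name, and validation rule below are hypothetical examples of the kind of check a data steward might define:

```python
# Measuring two common Data Quality dimensions over a set of records.
def completeness(records, field):
    """Fraction of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def validity(records, field, is_valid):
    """Fraction of the filled values that pass a validation rule."""
    values = [r[field] for r in records if r.get(field) not in (None, "")]
    return sum(1 for v in values if is_valid(v)) / len(values)

records = [
    {"email": "a@example.com"},
    {"email": "not-an-email"},
    {"email": ""},
    {"email": "b@example.com"},
]
print(completeness(records, "email"))                  # 0.75
print(validity(records, "email", lambda v: "@" in v))  # ~0.67
```

A minimum threshold is then just a comparison, e.g. flagging the dataset whenever `completeness(...) < 0.95`, which is how monitoring and alerting on Data Quality can be automated even without an industry-standard scale.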


What Is Web3 and How Will it Work?

Proponents envision Web3 as an internet that does not require us to hand over personal information to companies like Facebook and Google in order to use their services. The web would be powered by blockchain technology and artificial intelligence, with all information published on the public ledger of the blockchain. Similar to how cryptocurrency operates, everything would have to be verified by the network before being accepted. Online apps would theoretically let people exchange information or currency without a middleman. A Web3 internet would also be permissionless, meaning anyone could use it without having to generate access credentials or get permission from a provider. Instead of being stored on servers as it is now, the data that makes up the internet would be stored on the network. Any changes to, or movement of, that data would be recorded on the blockchain, establishing a record that would be verified by the entire network. In theory, this prevents bad actors from misusing data while establishing a clear record of where it’s going.


Social engineering: Definition, examples, and techniques

The phrase "social engineering" encompasses a wide range of behaviors, and what they all have in common is that they exploit certain universal human qualities: greed, curiosity, politeness, deference to authority, and so on. While some classic examples of social engineering take place in the "real world"—a man in a FedEx uniform bluffing his way into an office building, for example—much of our daily social interaction takes place online, and that's where most social engineering attacks happen as well. ... Fighting against all of these techniques requires vigilance and a zero-trust mindset. That can be difficult to inculcate in ordinary people; in the corporate world, security awareness training is the number one way to prevent employees from falling prey to high-stakes attacks. Employees should be aware that social engineering exists and be familiar with the most commonly used tactics. Fortunately, social engineering awareness lends itself to storytelling. And stories are much easier to understand and much more interesting than explanations of technical flaws. Quizzes and attention-grabbing or humorous posters are also effective reminders about not assuming everyone is who they say they are.


Decentralization revolutionizes the creator’s economy, but what will it bring?

Much like social tokens, nonfungible tokens (NFTs) are another innovation shaping the creator economy. Consider that the NFT-based crypto art market is now worth over $2.3 billion (as of mid-February 2022), pointing to the lucrative opportunity that artists have in accessing new monetization streams for their work. Meanwhile, NFTs can also be leveraged to engineer a new model of fan engagement as they reconcile virtual assets with real-world experiences. Enter the phygital experience — a mix of physical and digital. NFTs can be tied to real-world perks — if you’re a musician, that could mean a lifetime supply of concert tickets or VIP meet and greets and as an artist, a select number of prints in a collection — all while ensuring that these assets verifiably belong to a fan, attesting to their ownership and authenticity. As economies gradually reopen and we continue to see the eventual normalization of social activities, experiential NFTs as a tool for long-term fan engagement are likely to grow in popularity. Let’s not stop there, though: Enter interactive NFTs. These assets can change over time based on a fan’s modification to the content. 


How CSPs Are Now Using Blockchain

A fundamental issue in cloud computing is a reliance on a centralised server for data management and decision-making. Problems emerge, such as the failure of the central server, which can disrupt the entire system and result in the loss of crucial data kept on the central server. In addition, the central server is vulnerable to hacker attacks. Blockchain technology can help solve this problem because many copies of the same data are saved on various computer nodes in a decentralised system, eliminating the risk of the entire system failing if one server fails. Furthermore, data loss should not be an issue because many copies of the data are stored on various nodes. ... Leading cryptocurrency software company Blockchain achieved savings of 30 per cent by replacing its database layer with Google Cloud Spanner as it moves to managed services on Google Cloud. With millions of users across the globe relying on blockchain for information about and access to their funds, it’s no surprise that one of its core values is Sanctify Security. “Security is our top priority,” says Lewis Tuff, Blockchain’s head of platform engineering.


High Performance Decoupled Buses for IoT Displays

We exploit the fact that across almost all devices, there is similar required behavior. For example, devices have commands and data. The data is often parameters to commands, but sometimes it's a stream of pixels, although that is technically a BLOB parameter to a memory write command. Anyway, on an SPI device, you typically have an additional "DC" line that toggles between commands and data. I2C has something similar, except that the toggle is indicated by a code in the first byte of every I2C transaction. Parallel buses also have a DC line, though it's usually called RS; it does the same thing as the SPI variant. The idea here is that we are going to expand the surface area of our bus API to include everything applicable to any kind of bus. For example, you may have begin_transaction() and end_transaction(), which define transaction boundaries for SPI but do nothing in the parallel rendition. The I2C bus is pretty straightforward, but the SPI and parallel buses are significantly more complicated due to processor-specific optimizations.
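The widened-surface-area idea can be sketched as follows. This is a Python stand-in for what would be C++ on a microcontroller, and the class and command names are illustrative: every bus type exposes the same API, and methods that don't apply to a given bus are simply no-ops, so driver code is written once:

```python
# A decoupled bus API: the SPI variant has real transaction boundaries
# and a DC line; the parallel variant maps DC to RS and makes the
# transaction calls no-ops. Each bus records what it did in a log.
class SPIBus:
    def __init__(self): self.log = []
    def begin_transaction(self): self.log.append("assert CS")
    def end_transaction(self):   self.log.append("release CS")
    def write_command(self, b):  self.log.append(f"DC low, cmd {b:#04x}")
    def write_data(self, data):  self.log.append(f"DC high, {len(data)} bytes")

class ParallelBus:
    def __init__(self): self.log = []
    def begin_transaction(self): pass  # no transaction concept: no-op
    def end_transaction(self):   pass
    def write_command(self, b):  self.log.append(f"RS low, cmd {b:#04x}")
    def write_data(self, data):  self.log.append(f"RS high, {len(data)} bytes")

def fill_window(bus, pixels):
    # A display driver written once against the common surface.
    bus.begin_transaction()
    bus.write_command(0x2C)  # hypothetical "memory write" command
    bus.write_data(pixels)
    bus.end_transaction()

spi, par = SPIBus(), ParallelBus()
fill_window(spi, bytes(16))
fill_window(par, bytes(16))
print(spi.log)
print(par.log)
```

The driver never branches on the bus type; the cost of the extra surface area is paid once per bus implementation rather than in every driver.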



Quote for the day:

"One measure of leadership is the caliber of people who choose to follow you." -- Dennis A. Peer

Daily Tech Digest - February 19, 2022

CIO Strategy for Mergers & Acquisitions

The success of a merger of two organizations relies on multiple factors, such as economic certainty, accurate valuations, proper identification of targets, strong due diligence processes and technology integration. The most prominent factor among these is technology integration, i.e. merging the organizations' IT systems. The IT systems of each organization consist of a set of applications, IT infrastructure, databases, licenses, technologies and their complexities. After integration, one set of systems and its infrastructure becomes redundant. The greater the amount of duplication, the higher the redundancy, leading to an increase in the cost and complexity of an integration. The role of the CIO and Information Technology (IT) in M&A has become increasingly important, as the need for quick turnaround time is the primary factor. The CIO needs to be involved during the deal preparation, assessment, and due diligence phases of M&A. In addition, the CIO's team needs to identify the key IT processes, IT risks, costs and synergies of the organization.


Eight countries jointly propose principles for mutual recognition of digital IDs

There are 11 principles in total, all contained in a report [PDF] about digital identity in a COVID-19 environment, that the DIWG envisions would be used by all governments when building digital identity frameworks. The principles are openness, transparency, reusability, user-centricity, inclusion and accessibility, multilingualism, security and privacy, technology neutrality and data portability, administrative simplicity, preservation of information, and effectiveness and efficiency. According to the DIWG, the principles aim to allow for a common understanding to guide future discussions on both mutual recognition and interoperability of digital identities and infrastructure. In providing the principles, the DIWG noted that mutual recognition and interoperability of digital identities between countries is still several years away, with the group saying there are foundational activities that need to be undertaken before it can be achieved. These foundational activities include creating a definition of a common language and definitions across digital identities, assessing and aligning respective legal and policy frameworks, and creating interoperable technical models and infrastructure.


Joel Spolsky on Structuring the Web with the Block Protocol

The Block Protocol is not the first attempt, however, at bringing structure to data presented on the web. The problem, says Spolsky, is that previous attempts — such as Schema.org or Dublin Core — have included that structure as an afterthought, as homework that could be left undone without any consequence to the creator. At the same time, the primary benefit of doing that homework was often to game search engine optimization (SEO) algorithms, rather than to provide structured data to the web at large. Search engines quickly caught on to that and began ignoring the content entirely, which led to web content creators abandoning these attempts at structure. Spolsky said this led them to ask one simple question: “What’s a way we can make it so that the web can be better structured, in a way that’s actually easier to write for a web developer than if they [had] left out the structure in the first place?” ... The basic building blocks of the web — HTML and CSS — describe content and how it should be displayed in a human-readable format, “but it doesn’t describe anything about that type of data or what the data is or what it does,” said Spolsky. 


Avoiding the Achilles Heel of Non-European Cybersecurity

US-based organizations are beholden to regulations such as the CLOUD Act and the US PATRIOT Act, which pose a risk to data belonging to any other region. Any application or solution built in the US — be it concerned with cybersecurity, hosting or collaboration — is required to have a backdoor built-in, allowing third parties to access the data within, often without the owner ever knowing — particularly if they’re foreign. Moreover, on his last full day in office and following the large-scale SolarWinds attack, former President Trump signed an executive order decreeing that American IaaS cloud providers must keep a wealth of sensitive information on their foreign clients — names, physical and email addresses, national identification numbers, sources of payment, phone numbers and IP addresses — in order to help US authorities track down cyber-criminals. As these services include “destination” cloud networks, such as AWS, Microsoft Azure, and Google Cloud, it impacts many citizens and companies worldwide. 


5 Questions for Evaluating DBMS Features and Capabilities

Among RDBMSs, both SQL Server and Snowflake use a kind of umbrella data type, VARIANT, to store data of virtually any type. The labor-saving dimension of typing is much less important here: in the case of the VARIANT type, the database must usually be told what to do with the data. The emphasis in this definition of data type goes to the issue of convenience: BLOB and similar types are primarily useful as a means to store data in the RDBMS irrespective of the data's structure. Google Cloud's implementation of a JSON "data type" in BigQuery ticks both these boxes. First, it is labor-saving, in that BigQuery knows what to do with JSON data according to its type. Second, it is convenient, in that it gives customers a means to preserve and perform operations on data serialized in JSON objects. The implementation permits an organization to ingest JSON-formatted messages into the RDBMS (BigQuery) and to preserve them intact. Access to raw JSON data could be valuable for future use cases. It also makes it much easier for users to access and manipulate this data.
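The convenience described above — ingest JSON messages intact, extract fields long afterward — is essentially schema-on-read. A small Python sketch, standing in loosely for what a JSON column type does server-side (the event messages and the `json_extract` helper are hypothetical, not a BigQuery API), illustrates the idea:

```python
import json

# Hypothetical event messages, ingested and preserved verbatim,
# the way a JSON column type would store them.
raw_messages = [
    '{"event": "login", "user": {"id": 42, "region": "eu"}}',
    '{"event": "purchase", "user": {"id": 7}, "amount": 19.99}',
]

def json_extract(raw: str, path: str):
    """Walk a dotted path into a JSON document, returning None when a key
    is absent -- roughly what SQL-level JSON accessors do."""
    node = json.loads(raw)
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# Fields can be pulled out long after ingestion, with no upfront schema;
# messages that lack a field simply yield None rather than failing.
user_ids = [json_extract(m, "user.id") for m in raw_messages]
regions = [json_extract(m, "user.region") for m in raw_messages]
```

Because nothing is discarded at ingestion time, fields that matter only to a future use case (here, `user.region`) remain queryable — the "valuable for future use cases" point in the excerpt.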


Digital payments: How banks can stave off fintech challengers

To safeguard their payments business, banks must pursue two main objectives: replace their existing legacy systems and improve the payment services and functionality they offer to retail and corporate customers. In this way, banks can ensure that their provision of payment services remains intact. Some banks have tried to solve this problem by acquiring a fintech challenger. Others have sought to build their own technology from scratch — although this has been shown to carry risks. However, one of the best options for banks is to find new partners, in terms of both technology and services, with whom they can create a more loosely coupled infrastructure for payment services. This, in turn, will help them to become more agile in the payments sphere, according to Frank. "Banks like JP Morgan are a standard bearer here and commit huge sums to tech investment annually," says Frank. "The key is to target a more agile tech stack both in terms of infrastructure – that is in terms of cloud adoption, enhanced security, devices and networks, as well as applications – whether it is delivered as a Software-as-a-Service (SaaS) or a white-labelled service."


Cloud Data Management Disrupts Storage Silos and Team Silos Too

In the context of enterprise data storage, unstructured data management has been a practice for many years, although it originated in storage vendor platforms. Now that enterprises are using many different storage technologies — block storage for database and virtualization, NAS for user and application workloads, backup solutions in the data center or in the cloud — a storage-centric approach to data management no longer fits the bill. That’s because, among other reasons, storage vendor data management solutions don’t solve the problem of managing silos of data stored on different platforms. Silos hamper visibility and governance, leading to higher costs and poor utilization. As more workloads and data move to the cloud to save money and enable flexibility and innovation, cloud data management has become a growing practice. Cloud data management (CDM) goes beyond storage to meet the ever-changing needs for data mobility and access, cost management, security and, increasingly, data monetization. 


Executive Q&A: Data Management and the Cloud

Understanding which type of cloud database is the right fit is often the biggest challenge. It's helpful to think of cloud-native databases as being in one of two categories: platform-native systems (i.e., offerings by cloud providers themselves) or in-cloud systems offered by third-party vendors. Platform-native solutions include Azure Synapse, BigQuery, and Redshift. They offer deep integration with the provider's cloud. Because they are highly optimized for their target infrastructure, they offer seamless and immediate interoperability with other native services. Platform-native systems are a great choice for enterprises that want to go all-in on a given cloud and are looking for simplicity of deployment and interoperability. In addition, these systems offer the considerable advantage of dealing with only a single vendor. In contrast, in-cloud systems tout cloud independence. This seems like a great advantage at first. However, moving hundreds of terabytes between clouds has its own challenges. In addition, customers inevitably end up using other platform-native services that are only available on a given cloud, which further reduces the perceived advantage of cloud independence.


The metaverse is a new word for an old idea

These are good conversations to have. But we would be remiss if we didn’t take a step back to ask, not what the metaverse is or who will make it, but where it comes from—both in a literal sense and also in the ideas it embodies. Who invented it, if it was indeed invented? And what about earlier constructed, imagined, augmented, or virtual worlds? What can they tell us about how to enact the metaverse now, about its perils and its possibilities? There is an easy seductiveness to stories that cast a technology as brand-new, or at the very least that don’t belabor long, complicated histories. Seen this way, the future is a space of reinvention and possibility, rather than something intimately connected to our present and our past. But histories are more than just backstories. They are backbones and blueprints and maps to territories that have already been traversed. Knowing the history of a technology, or the ideas it embodies, can provide better questions, reveal potential pitfalls and lessons already learned, and open a window onto the lives of those who learned them. 


Slow Down !! Cloud is Not for Everyone

"Most often it's not the main course but the desserts that bloat your bill." In the cloud, the cost is not only compute and memory but also lock-in. Assume you have an on-premises license for a database enterprise edition that cannot be ported to the cloud (because of incompatibility, contractual complications or much higher cloud licensing) and you opt to move to a native database offered by your chosen cloud provider. What appears to be a straightforward migration is in fact a much deeper trap that locks you in with your cloud vendor. As a first step, you need to retrain your workforce; then, gradually, you will be forced to rewrite or replace the homegrown and/or SaaS features of your product to make them compatible with the new service. These efforts were never part of your original plan but have now become a critical necessity just to keep the lights on. If, after a certain period, you realize the cloud service is not a great fit and decide to move back or on to a better alternative, the insidious lock-in effect appears: vendors make such a move particularly difficult, and you must burn significant dollars to migrate out.



Quote for the day:

"When people talk, listen completely. Most people never listen." -- Ernest Hemingway