Daily Tech Digest - June 22, 2022

What you need to know about site reliability engineering

What is site reliability engineering? The creator of the first site reliability engineering (SRE) program, Benjamin Treynor Sloss at Google, described it this way: Site reliability engineering is what happens when you ask a software engineer to design an operations team. What does that mean? Unlike traditional system administrators, site reliability engineers (SREs) apply solid software engineering principles to their day-to-day work. For laypeople, a clearer definition might be: Site reliability engineering is the discipline of building and supporting modern production systems at scale. SREs are responsible for the reliability, performance, availability, latency, and efficiency of both infrastructure and software, as well as monitoring, emergency response, change management, release planning, and capacity planning. ... SREs should be spending more time designing solutions than applying band-aids. A general guideline is for SREs to spend 50% of their time on engineering work, such as writing code and automating tasks. When an SRE is on-call, the remaining time should be split roughly evenly between managing incidents (about 25%) and operations duty (about 25%).
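The 50% engineering guideline pairs naturally with the error budgets SREs use to quantify reliability targets. A minimal sketch of that arithmetic, assuming an illustrative 99.9% availability SLO over a 30-day month (the figure is an assumption for the example, not from the article):

```python
def error_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of allowed downtime in a period for a given availability SLO."""
    return (1.0 - slo) * period_minutes

# A 99.9% SLO over a 30-day month leaves about 43 minutes of downtime budget.
print(round(error_budget_minutes(0.999), 1))  # 43.2
```

A budget like this is what lets a team trade reliability work against feature work in a measurable way: once the budget for the period is spent, engineering effort shifts back toward stability.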


Are blockchains decentralized?

Over the past year, Trail of Bits was engaged by the Defense Advanced Research Projects Agency (DARPA) to examine the fundamental properties of blockchains and the cybersecurity risks associated with them. DARPA wanted to understand those security assumptions and determine to what degree blockchains are actually decentralized. To answer DARPA’s question, Trail of Bits researchers performed analyses and meta-analyses of prior academic work and of real-world findings that had never before been aggregated, updating prior research with new data in some cases. They also did novel work, building new tools and pursuing original research. The resulting report is a 30-thousand-foot view of what’s currently known about blockchain technology. Whether these findings affect financial markets is out of the scope of the report: our work at Trail of Bits is entirely about understanding and mitigating security risk. The report also contains links to the substantial supporting and analytical materials. Our findings are reproducible, and our research is open-source and freely distributable. So you can dig in for yourself.


Why The Castle & Moat Approach To Security Is Obsolete

At first, the shift in security strategy went from protecting one, single castle to a “multiple castle” approach. In this scenario, you’d treat each salesperson’s laptop as a sort of satellite castle. SaaS vendors and cloud providers played into this idea, trying to convince potential customers not that they needed an entirely different way to think about security, but rather that, by using a SaaS product, they were renting a spot in the vendor’s castle. The problem is that once you have so many castles, the interconnections become increasingly difficult to protect. And it’s harder to say exactly what is “inside” your network versus what is hostile wilderness. Zero trust assumes that the castle system has broken down completely, so that each individual asset is a fortress of one. Everything is always hostile wilderness, and you operate under the assumption that you can implicitly trust no one. It’s not an attractive vision for society, which is why we should probably retire the castle and moat metaphor: it makes sense to eliminate the human concept of trust in our approach to cybersecurity and treat every user as potentially hostile.


Improving AI-based defenses to disrupt human-operated ransomware

Disrupting attacks in their early stages is critical for all sophisticated attacks but especially human-operated ransomware, where human threat actors seek to gain privileged access to an organization’s network, move laterally, and deploy the ransomware payload on as many devices in the network as possible. For example, with its enhanced AI-driven detection capabilities, Defender for Endpoint managed to detect and incriminate a ransomware attack early in its encryption stage, when the attackers had encrypted files on fewer than 4% of the organization’s devices, demonstrating improved ability to disrupt an attack and protect the remaining devices in the organization. This instance illustrates the importance of the rapid incrimination of suspicious entities and the prompt disruption of a human-operated ransomware attack. ... A human-operated ransomware attack generates a lot of noise in the system. During this phase, solutions like Defender for Endpoint raise many alerts upon detecting multiple malicious artifacts and behavior on many devices, resulting in an alert spike.


Reexamining the “5 Laws of Cybersecurity”

The first rule of cybersecurity is to treat everything as if it’s vulnerable because, of course, everything is vulnerable. Every risk management course, security certification exam, and audit mindset always emphasizes that there is no such thing as a 100% secure system. Arguably, the entire cybersecurity field is founded on this principle. ... The third law of cybersecurity, originally popularized as one of Brian Krebs’ 3 Rules for Online Safety, aims to minimize attack surfaces and maximize visibility. While Krebs was referring only to installed software, the ideology supporting this rule has expanded. For example, many businesses retain data, systems, and devices they don’t use or need anymore, especially as they scale, upgrade, or expand. This is like that old, beloved pair of worn-out running shoes sitting in a closet. This excess can present unnecessary vulnerabilities, such as a decades-old exploit discovered in some open source software. ... The final law of cybersecurity states that organizations should prepare for the worst. This is perhaps truer than ever, given how rapidly cybercrime is evolving. The risks of a zero-day exploit are too high for businesses to assume they’ll never become the victims of a breach.


How to Adopt an SRE Practice (When You’re not Google)

At a very high level, Google defines the core of SRE principles and practices as an ability to “embrace risk.” Site reliability engineers balance the organizational need for constant innovation and delivery of new software with the reliability and performance of production environments. The practice of SRE grows as the adoption of DevOps grows because they both help balance the sometimes opposing needs of the development and operations teams. Site reliability engineers inject processes into the CI/CD and software delivery workflows to improve performance and reliability, but they also know when to sacrifice stability for speed. By working closely with DevOps teams to understand critical components of their applications and infrastructure, SREs can also learn the non-critical components. Creating transparency across all teams about the health of their applications and systems can help site reliability engineers determine a level of risk they can feel comfortable with. The level of desired service availability and acceptable performance issues that you can reasonably allow will depend on the type of service you support as well.


Are Snowflake and MongoDB on a collision course?

At first blush, it looks like Snowflake is seeking to get the love from the crowd that put MongoDB on the map. But a closer look suggests that Snowflake is appealing not to the typical JavaScript developer who works with a variable schema in a document database, but to developers who may write in various languages, but are accustomed to running their code as user-defined functions, user-defined table functions or stored procedures in a relational database. There’s a similar issue with data scientists and data engineers working in Snowpark, but with one notable exception: They have the alternative to execute their code through external functions. That, of course, prompts the debate over whether it’s more performant to run everything inside the Snowflake environment or bring in an external server – one that we’ll explore in another post. While document-oriented developers working with JSON might perceive SQL UDFs as foreign territory, Snowflake is making one message quite clear with the Native Application Framework: As long as developers want to run their code in UDFs, they will be just as welcome to profit off their work as the data folks.


Fermyon wants to reinvent the way programmers develop microservices

If you’re thinking the solution sounds a lot like serverless, you’re not wrong, but Matt Butcher, co-founder and CEO at Fermyon, says that instead of forcing a function-based programming paradigm, the startup decided to use WebAssembly, a much more robust programming environment, originally created for the browser. Using WebAssembly solved a bunch of problems for the company, including security, speed and efficiency in terms of resources. “All those things that made it good for the browser were actually really good for the cloud. The whole isolation model that keeps WebAssembly from being able to attack the hosts through the browser was the same kind of [security] model we wanted on the cloud side,” Butcher explained. What’s more, a WebAssembly module can download quickly and execute almost instantly, which addresses performance concerns. Finally, instead of having a bunch of servers sitting around waiting in case there’s peak traffic, Fermyon can start modules up nearly instantly and run them on demand.


Metaverse Standards Forum Launches to Solve Interoperability

According to Trevett, the new forum will not concern itself with philosophical debates about what the metaverse will be in 10-20 years’ time. However, he thinks the metaverse is “going to be a mixture of the connectivity of the web, some kind of evolution of the web, mixed in with spatial computing.” He added that spatial computing is a broad term, but here refers to “3D modeling of the real world, especially in interaction through augmented and virtual reality.” “No one really knows how it’s all going to come together,” said Trevett. “But that’s okay. For the purposes of the forum, we don’t really need to know. What we are concerned with is that there are clear, short-term interoperability problems to be solved.” Trevett noted that there are already multiple standards organizations for the internet, including of course the W3C for web standards. What MSF is trying to do is help coordinate them, when it comes to the evolving metaverse. “We are bringing together the standards organizations in one place, where we can coordinate between each other but also have good close relationships with the industry that [is] trying to use our standards,” he said.


What We Now Know: Digital Transformation Reaches a Point of Clarity

Technology adoption, as part of a digital transformation initiative, is generally of a greater scale and impact than what most are accustomed to, primarily because we are looking not only to revamp parts of our IT enterprise, but also to introduce brand new technology architecture environments composed of a combination of heavy-duty systems. In addition to the due diligence that comes with planning for and incorporating new technology innovations, with digital transformation initiatives we need to be extra careful not to be lured into over-automation. The reengineering and optimization of our business processes in support of enhancing productivity and customer-centricity need to be balanced with practical considerations and the opportunity to first prove that a given enhancement is actually effective with our customers before building enhancements upon it. If we automate too much too soon, it will be painful to roll back, both financially and organizationally. Laying out a phased approach will avoid this.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - June 21, 2022

Effective Software Testing – A Developer’s Guide

When there are decisions depending on multiple conditions (i.e. complex if-statements), it is possible to get decent bug detection without having to test all possible combinations of conditions. Modified condition/decision coverage (MC/DC) exercises each condition so that it, independently of all the other conditions, affects the outcome of the entire decision. In other words, every possible condition of each parameter must influence the outcome at least once. The author does a good job of showing how this is done with an example. So given that you can check the code coverage, you must decide how rigorous you want to be when covering decision points, and create test cases for that. The concept of boundary points is useful here. For a loop, it is reasonable to at least test when it executes zero, one and many times. It can seem like it should be enough to just do structural testing, and not bother with specification-based testing, since structural testing makes sure all the code is covered. However, this is not true. Analyzing the requirements can lead to more test cases than simply checking coverage. For example, if results are added to a list, a test case adding one element will cover all the code.
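As a hedged illustration of the two ideas above (the discount rule and summing function are invented for this sketch, not taken from the book): MC/DC for a two-condition decision needs only three cases instead of all four combinations, and a loop should be exercised at its boundary points of zero, one, and many iterations.

```python
def discount(total: float, is_member: bool) -> float:
    """Apply a 10% discount for members spending over 100 (illustrative rule)."""
    if is_member and total > 100:
        return total * 0.9
    return total

# MC/DC: each condition must independently flip the decision's outcome.
assert discount(200, True) == 180.0   # both conditions true, decision true
assert discount(200, False) == 200.0  # flipping is_member flips the outcome
assert discount(50, True) == 50.0     # flipping total > 100 flips the outcome
# Three cases achieve MC/DC here; exhaustive combination testing needs four.

def total_of(items):
    """Sum a list with an explicit loop, to illustrate boundary testing."""
    s = 0
    for x in items:
        s += x
    return s

# Boundary points for the loop: zero, one, and many iterations.
assert total_of([]) == 0
assert total_of([5]) == 5
assert total_of([1, 2, 3]) == 6
```

Note that a single test with a one-element list would already give 100% structural coverage of `total_of`, which is exactly the book's point about why specification-based cases are still needed.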


Inconsistent thoughts on database consistency

While linearizability is about a single piece of data, serializability is about multiple pieces of data. More specifically, serializability is about how to treat concurrent transactions on the same underlying pieces of data. The “safest” way to handle this is to line up transactions in the order they arrived and execute them serially, making sure that one finishes before the next one starts. In reality, this is quite slow, so we often relax this by executing multiple transactions concurrently. However, there are different levels of safety around this concurrent execution, as we’ll discuss below. Consistency models are super interesting, and the Jepsen breakdown is enlightening. If I had to quibble, it’s that I still don’t quite understand the interplay between the two poles of consistency models. Can I choose a lower level of linearizability along with the highest level of serializability? Or does the existence of any level lower than linearizable mean that I’m out of the serializability game altogether? If you understand this, hit me up! Or better yet, write up a better explanation than I ever could :). If you do, let me know so I can link it here.
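The "line them up and run them one at a time" strategy can be sketched in a few lines; the queue and transaction shapes here are illustrative stand-ins, not any particular database's machinery.

```python
from queue import Queue

def run_serially(store: dict, transactions) -> dict:
    """Execute transactions strictly in arrival order, one at a time.

    Each transaction is a function that reads and writes `store`.
    Serial execution sidesteps every concurrency anomaly, at the
    cost of throughput, which is why real databases relax it into
    weaker isolation levels.
    """
    pending = Queue()
    for tx in transactions:
        pending.put(tx)
    while not pending.empty():
        tx = pending.get()
        tx(store)  # one transaction finishes before the next starts
    return store

# Two transactions against the same account, applied in arrival order.
def deposit_100(s): s["balance"] += 100
def withdraw_30(s): s["balance"] -= 30

final = run_serially({"balance": 0}, [deposit_100, withdraw_30])
print(final["balance"])  # 70
```

A serializable database promises results *as if* transactions ran this way, even when it actually interleaves them for performance.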


AI and How It’s Helping Banks to Lower Costs

Using AI helps banks lower the costs of predicting future trends. Instead of hiring financial analysts to analyze data, AI is used to organize and present data that the banks can use. They can get real-time data to analyze behaviors, predict future trends, and understand outcomes. With this, banks can get more data that, in turn, helps them make better predictions. ... Another advantage of using AI in the banking industry is that it reduces human errors. By reducing errors, banks prevent loss of revenue caused by these errors. Moreover, human errors can lead to financial data breaches. When this happens, critical data may get exposed to criminals. They can use the stolen data to impersonate clients in fraudulent activities. Especially with a high volume of work, employees cannot avoid committing errors. With the help of AI, banks can reduce a variety of errors. ... AI helps banks save money by detecting fraudulent payments. Without AI, banks may lose millions because of criminal activities. But thanks to AI, banks can prevent such losses as the technology can analyze more than one channel of data to detect fraud.


Is NoOps the End of DevOps?

NoOps is not a one-size-fits-all solution. You know that it’s limited to apps that fit into existing serverless and PaaS solutions. Since some enterprises still run on monolithic legacy apps (requiring total rewrites or massive updates to work in a PaaS environment), you’d still need someone to take care of operations even if there’s a single legacy system left behind. In this sense, NoOps is still a long way from handling long-running apps that run specialized processes or production environments with demanding applications. With DevOps, by contrast, operations work happens before code goes to production. Releases include monitoring, testing, bug fixes, security, and policy checks on every commit, and so on. You must have everyone on the team (including key stakeholders) involved from the beginning to enable fast feedback and ensure automated controls and tasks are effective and correct. Continuous learning and improvement (a pillar of DevOps teams) shouldn’t only happen when things go wrong; instead, members must work together and collaboratively to problem-solve and improve systems and processes.


How IT Can Deliver on the Promise of Cloud

While many newcomers to the cloud assume that hyperscalers will handle most of the security, the truth is they don’t. Public cloud providers such as AWS, Google, and Microsoft Azure publish shared responsibility models that push security of the data, platform, applications, operating system, network and firewall configuration, and server-side encryption, to the customer. That’s a lot you need to oversee, with high levels of risk and exposure should things go wrong. Have you set up ransomware protection? Monitored your network environment for ongoing threats? Arranged for security between your workloads and your client environment? Secured sets of connections for remote client access or remote desktop environments? Maintained audit control of open source applications running in your cloud-native or containerized workloads? These are just some of the security challenges IT faces. Security of the cloud itself – the infrastructure and storage – falls to the service providers. But your IT staff must handle just about everything else.


Distributed Caching on Cloud

Caching is a technique to store the state of data outside of the main storage and store it in high-speed memory to improve performance. In a microservices environment, all apps are deployed with their multiple instances across various servers/containers on the hybrid cloud. A single caching source is needed in a multicluster Kubernetes environment on cloud to persist data centrally and replicate it on its own caching cluster. It will serve as a single point of storage to cache data in a distributed environment. ... Distributed caching is now a de-facto requirement for distributed microservices apps in a distributed deployment environment on hybrid cloud. It addresses concerns in important use cases like maintaining user sessions when cookies are disabled on the web browser, improving API query read performance, avoiding operational cost and database hits for the same type of requests, managing secret tokens for authentication and authorization, etc. Distributed cache syncs data on hybrid clouds automatically without any manual operation and always gives the latest data. 
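A common way microservices consume such a cache is the cache-aside pattern: check the cache first, and fall back to the backing store only on a miss. Here is a minimal single-process sketch in which a plain dict stands in for the distributed cache cluster (the class and function names are invented for illustration):

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: a dict stands in for a distributed
    cache cluster, with a per-entry time-to-live (TTL)."""

    def __init__(self, load_from_store, ttl_seconds=60):
        self._load = load_from_store
        self._ttl = ttl_seconds
        self._entries = {}  # key -> (value, expires_at)

    def get(self, key):
        hit = self._entries.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]  # cache hit: no round trip to the backing store
        value = self._load(key)  # cache miss: fall back to the store
        self._entries[key] = (value, time.monotonic() + self._ttl)
        return value

calls = []
def fetch_user(key):
    calls.append(key)  # count how often the backing store is actually hit
    return {"id": key, "name": f"user-{key}"}

cache = CacheAside(fetch_user)
cache.get(1)
cache.get(1)
print(len(calls))  # 1: the second read was served from the cache
```

In the distributed setting the article describes, the dict is replaced by a shared caching cluster so that every app instance, on any node, sees the same entries; the access pattern stays the same.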


Bridging The Gap Between Open Source Database & Database Business

It is relatively easy to get a group of people together to create a new database management system or new data store. We know this because over the past five decades of computing, the rate of proliferation of tools to provide structure to data has increased, and seemingly at an increasing rate at that. Thanks in no small part to the innovation by the hyperscalers and cloud builders, as well as academics who just plain like mucking around in the guts of a database to prove a point. But it is another thing entirely to take an open source database or data store project and turn it into a business that can provide enterprise-grade fit and finish and support a much wider variety of use cases and customer types and sizes. This is hard work, and it takes a lot of people, focus, money – and luck. This is the task that Dipti Borkar, Steven Mih, and David Simmen took on when they launched Ahana two years ago to commercialize the PrestoDB variant of the Presto distributed SQL engine created by Facebook, and not coincidentally, it is a similar task that the original creators of Presto have taken on with the PrestoSQL, now called Trino, variant of Presto that is commercialized by their company, called Starburst.


Data gravity: What is it and how to manage it

Examples of data gravity include applications and datasets moving to be closer to a central data store, which could be on-premise or co-located. This makes best use of existing bandwidth and reduces latency. But it also begins to limit flexibility, and can make it harder to scale to deal with new datasets or adopt new applications. Data gravity occurs in the cloud, too. As cloud data stores increase in size, analytics and other applications move towards them. This takes advantage of the cloud’s ability to scale quickly, and minimises performance problems. But it perpetuates the data gravity issue. Cloud storage egress fees are often high and the more data an organisation stores, the more expensive it is to move it, to the point where it can be uneconomical to move between platforms. McCrory refers to this as “artificial” data gravity, caused by cloud services’ financial models, rather than by technology. Forrester points out that new sources and applications, including machine learning/artificial intelligence (AI), edge devices or the internet of things (IoT), risk creating their own data gravity, especially if organisations fail to plan for data growth.
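The economic side of data gravity is easy to quantify. A sketch, assuming an illustrative flat egress price of $0.09 per GB (actual provider pricing is tiered and varies by region and destination):

```python
def egress_cost_usd(dataset_tb: float, price_per_gb: float = 0.09) -> float:
    """Cost to move a dataset out of a cloud at a flat per-GB egress rate.

    The $0.09/GB default is an illustrative list-style price, not a quote.
    """
    return dataset_tb * 1024 * price_per_gb

# Moving a 500 TB store out of a cloud at this rate costs about $46,000,
# which is the "artificial" gravity McCrory describes: the bigger the
# dataset grows, the less economical it is to move it between platforms.
print(round(egress_cost_usd(500)))  # 46080
```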


CIOs Must Streamline IT to Focus on Agility

“Streamlining IT for agility is critical to business, and there’s not only external pressure to do so, but also internal pressure,” says Stanley Huang, co-founder and CTO at Moxo. “This is because streamlining IT plays a strategic role in the overall business operations from C-level executives to every employee's daily efforts.” He says that the streamlining of business processes is the best and most efficient way to reflect business status and provide driving power for each department’s planning. From an external standpoint, there is pressure to streamline IT because it also impacts the customer experience. “A connected and fully aligned cross-team interface is essential to serve the customer and make a consistent end user experience,” he adds. For business opportunities pertaining to task allocation and tracking, streamlining IT can help align internal departments into one overall business picture and enable employees to perform their jobs at a higher level. “When the IT system owns the source of data for business opportunities and every team’s involvement, cross team alignment can be streamlined and made without back-and-forth communications,” Huang says.


Open Source Software Security Begins to Mature

Despite the importance of identifying vulnerabilities in dependencies, most security-mature companies — those with OSS security policies — rely on industry vulnerability advisories (60%), automated monitoring of packages for bugs (60%), and notifications from package maintainers (49%), according to the survey. Automated monitoring represents the most significant gap between security-mature firms and those firms without a policy, with only 38% of companies that do not have a policy using some sort of automated monitoring, compared with the 60% of mature firms. Companies should add an OSS security policy if they don't have one, as a way to harden their development security, says Snyk's Jarvis. Even a lightweight policy is a good start, he says. "There is a correlation between having a policy and the sentiment of stating that development is somewhat secure," he says. "We think having a policy in place is a reasonable starting point for security maturity, as it indicates the organization is aware of the potential issues and has started that journey."
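Automated monitoring of dependencies ultimately reduces to comparing what is installed against an advisory feed. A toy sketch of that check (package names and data shapes are invented for illustration; real tooling reads lockfiles and advisory databases):

```python
def vulnerable_dependencies(installed: dict, advisories: dict) -> list:
    """Flag installed packages whose versions appear in an advisory feed.

    `installed` maps package name to installed version; `advisories` maps
    package name to the set of known-vulnerable versions. Both are
    illustrative stand-ins for a lockfile and an advisory database.
    """
    return sorted(
        name for name, version in installed.items()
        if version in advisories.get(name, set())
    )

installed = {"libfoo": "1.2.0", "libbar": "2.0.1", "libbaz": "0.9.9"}
advisories = {"libfoo": {"1.2.0", "1.2.1"}, "libbaz": {"1.0.0"}}
print(vulnerable_dependencies(installed, advisories))  # ['libfoo']
```

Running a check like this on every build, rather than waiting for maintainer notifications, is the gap the survey highlights between security-mature firms and the rest.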



Quote for the day:

"No great manager or leader ever fell from heaven, it's learned, not inherited." -- Tom Northup

Daily Tech Digest - June 20, 2022

Metaverse: Momentum is building, but companies are still staying cautious

"It's too early to understand whether the metaverse is going to be a big thing or whether it is just another buzzword and marketing exercise," he says. "But I suspect it's going to have enough momentum behind it that it will become a thing that we will want to be interested in." That seems to be the general consensus among other industry observers, too. While the cash invested by Big Tech means the metaverse is likely to become successful eventually, no one should be expecting to collaborate with colleagues and friends in a rich virtual space tomorrow. Distinguished Gartner analyst Mark Raskino suggests that the challenge of filling the human field of view with a realistic and immersive image space is an incredibly hard problem to solve. "I do believe that one day business will commonly be conducted in a fully immersive 3D visual metaverse. But it will not happen in the 2020s. It probably won't happen in the 2030s." In fact, such is the slow pace of development that some businesses believe there's no big requirement to rush headfirst into metaverse pilots.


Are you ready to automate continuous deployment in CI/CD?

Continuous deployment, as a principle, can be applied to many applications and even in the most regulated industries. Tim Lucas, co-founder and co-CEO of Buildkite, says, “Continuous deployment can be adopted per project, and the best orgs set goals for moving as many projects as possible to this model. Even in finance and regulated industries, the majority of projects can adopt this model. We even see self-driving car companies doing continuous deployment.” While devops teams can implement continuous deployment in many projects, the question is, where does it offer a strong business case and significant technical advantages? Projects that deploy features and fixes frequently, and where a modernized architecture simplifies the automations, are the most promising to transition to continuous deployment. Lucas shares some of the prerequisites that should be part of the software development process before moving to a continuous deployment model. He says, “Continuous deployment is true agility, the fastest way from code change to production. It requires always keeping the main branch in a shippable state, automating tests, and high-quality tooling you can trust and have confidence in.”


Tech sector sustainability efforts need full ecosystem approach

On redefining growth, Ryan Shanks, head of sustainability for Europe at Accenture, noted that while innovation in many areas is done one company at a time and then used for competitive advantage, the opposite is true for climate change-related innovation. “What I’m seeing in our portfolio work at the moment, if it relates to the circular economy or the energy transition, etc, is none of our individual clients can actually do anything on their own,” he said. “They are hugely reliant on an ecosystem – policy folks, regulators, entrepreneurs, not-for-profits – of people coming together.” Shanks said that to achieve innovation at scale, the first thing organisations should do is adopt an inter-disciplinary approach from the ideation stage. “I mean the technologists, the consumer folks, the business model people and finally, increasingly for us, social scientists and ethicists, working side by side,” he said. “Now on a day-to-day basis, they’ll tell me that working together slows each of them down – the creatives want to work on their own, and the tech want to work on their own – but I’ll say it catches up in the long run because it speeds things up to get to scale.”


Redefining NaaS: It’s the internet

An internet NaaS will require either an SD-WAN, which has to be managed, or some added security layer (maybe SASE or a combination of encryption and firewall tools) to secure the applications themselves. Enterprises that use the internet to connect with customers and partners may find it relatively easy to add employee access via the internet, using access-security tools and encryption alone. That approach should be explored, but SD-WAN is the closest to traditional VPN technology, and that makes it possible to gradually transition from a traditional VPN into an internet NaaS via SD-WAN. You can get SD-WAN technology as a product set or as a managed service. If you really want to avoid capital purchases, the latter option is the way to go. The price of an internet SD-WAN managed service will depend on the usual factors like number of sites and the amount of management handholding you can expect, and also on just where the sites are. There’s a lot of variation, but enterprises that have switched to an internet NaaS tell me the total cost of ownership is far, far lower than a managed IP VPN.


The future of the creator economy in a Web3 world

Creator-owned content is the first iteration of the Web3 creator economy. On current social platforms such as Instagram and TikTok, the company behind the platform owns the content that creators produce. Web3 will enable creators to not only own their content on existing social platforms, but also own a part of the platform they produce and distribute content on. Content can begin to be creator-owned and platform-agnostic through the use of NFTs, which act as proof of ownership and validate the content’s authenticity. ... Creators will also play a key role in the metaverse. In addition to participating in it, creators can develop parts of the metaverse with either no-code tools or a technical background. This has already started to take shape in existing gaming metaverses, most notably Roblox. On Roblox, anyone can create video games and monetize them directly on the platform. In 2020, creators earned $329 million through Roblox alone. “Metaverse creators” will likely grow to become an active and profitable vertical of the creator economy in the years to come.


Zero Trust, SASE and SSE: foundational concepts for your next-generation network

Zero Trust Network Access is the technology that makes it possible to implement a Zero Trust security model by requiring strict verification for every user and every device before authorizing them to access internal resources. Compared to traditional virtual private networks (VPNs), which grant access to an entire local network at once, ZTNA only grants access to the specific application requested and denies access to applications and data by default. ... Browser isolation is a technology that keeps browsing activity secure by separating the process of loading webpages from the user devices displaying the webpages. This way, potentially malicious webpage code does not run on a user’s device, preventing malware infections and other cyber attacks from impacting both user devices and internal networks. Remote browser isolation (RBI) works together with other secure access functions - for example, security teams can configure Secure Web Gateway policies to automatically isolate traffic to known or potentially suspicious websites.
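The per-application, deny-by-default behavior that distinguishes ZTNA from a VPN can be sketched as a simple policy check (the names and data structure are illustrative, not a specific product's API):

```python
def authorize(user: str, device_trusted: bool, app: str, grants: dict) -> bool:
    """Zero-trust style access check: deny by default, grant per application.

    `grants` maps a user to the set of applications they may reach.
    Unlike a VPN, a successful check grants one application, never a
    whole network segment.
    """
    if not device_trusted:  # every request re-verifies the device posture
        return False
    return app in grants.get(user, set())  # no blanket network access

grants = {"alice": {"payroll", "wiki"}}
print(authorize("alice", True, "payroll", grants))  # True
print(authorize("alice", True, "billing", grants))  # False: not granted
print(authorize("bob", True, "wiki", grants))       # False: default deny
```

A real ZTNA broker evaluates far richer signals (identity provider claims, device posture, location, time), but the default-deny shape is the same.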


The Metaverse And Web3 Creating Value In The Future Digital Economy

The Metaverse is not to be confused with Web3, which is the third stage of development of the World Wide Web. The Metaverse refers to a virtual reality-based parallel internet world where users can interact with each other and digital objects in a 3D space. It's an extension of the internet into a three-dimensional virtual world. It is an immersive, interactive, and social platform where people can create avatars to represent themselves, buy and sell virtual property, and interact with other users in real-time. Web3 is more about blockchain technology and concepts, including digital identity, smart contracts, and decentralized applications (dApps). ... Many believe the Metaverse is a speculative scheme of the future, but it's about connecting the digital world with the physical world. It brings people together in a shared, virtual space to interact and create. Entrup continues, “Having personally witnessed the transition from a no Internet world to a globally connected Internet world, I find it funny to hear the same negative comments being made about the metaverse.”


The Great Resignation continues. There's an obvious fix, but many bosses aren't interested

The problem for many is that the traditional approach to filling skills gaps has become less and less effective. Every company on the planet seems to be on a mission to build a superstar tech team, and that means developers, cloud specialists and cybersecurity professionals are being snapped up at a rate that means it's almost impossible for hiring managers to keep up. ... "Skills help people stay," the report reads. "They help them thrive in their roles. And they enable you to deliver on your objectives." The problem for employees – and by extension, employers – is that other demands often prevent employees from upskilling. Pluralsight's report found that 61% of tech workers felt too busy to dedicate time to upskilling – the biggest barrier to development identified by survey respondents. This could be seen as another effect of the skills shortage: if teams are short-staffed, their resources are already going to be stretched trying to cover the day-to-day running of the department. On top of this, companies often claim they lack the budget and resources to properly invest in skills.


How to create a cloud center of excellence

The purpose of a CCoE is to provide an organizational focus on cloud initiatives within the company, and to bring order and structure to those initiatives. For a CCoE to be effective, your organization as a whole must buy into cloud computing and want to pursue it. Corporate management must be well-informed and supportive of the endeavor. A CCoE will not—cannot—be effective without company management support. It is not a tool to convince upper management of the effectiveness of the cloud. If you are in a position where you are trying to convince management of the value of the cloud, you should not look at a CCoE as the means to accomplish that. Once your leadership is convinced that the company needs to move forward with a cloud strategy, the CCoE can help execute that strategy. A CCoE is most effective when management makes use of the structure as a tool to bring the rest of the organization along and turn it into a cloud-centric organization. The CCoE is the implementation vessel for management’s wishes.


5G technology disruption – 4 sectors ripe for disruption

Although many banking apps can work perfectly well using 4G, they lose effectiveness when internet connectivity is strained by too many people trying to use the network simultaneously. 5G, with speeds that are theoretically 100 times faster, may offer an advantage which seems quite prosaic — it provides the service that people expect from 4G but often don’t get. But the above benefit of 5G is hardly disruptive. To imagine how 5G might be disruptive, consider how the internet and smartphones have disrupted financial services. We have seen new banking services from the likes of Monzo and Revolut disrupt the existing banking industry. 5G will create new opportunities for sophisticated real-time financial services, such as credit checks when buying big-ticket items. 5G may also enable superior security and anti-fraud technologies. For insurance, the opportunity created by data may be especially important, especially data related to mobile activities. The opportunity for augmented and virtual reality may be where the true disruption lies — we may even see a new type of banking model emerge that combines the best of both worlds: traditional branch banking and online.



Quote for the day:

"Leaders must always question the status quo, be aware of the ever-changing environment and be willing to act decisively." -- Mike Finley

Daily Tech Digest - June 19, 2022

What Is Zero Trust Architecture?

As one of the key pillars of Zero Trust Network Architecture (ZTNA), the concept of least privilege security assigns access credentials to key network resources at the least privilege level required to accomplish the desired task. Identifying critical corporate information and how a user gains access to that information must be taken into consideration when evaluating alternative solutions. Privileged Access Management (PAM), also known as Privileged Identity Management (PIM), can be implemented using corporate directory products such as Microsoft’s Active Directory. Microsoft has recently introduced a product named Microsoft Entra to address identity and access issues in a multicloud environment. Other vendors in the PAM/PIM category include JumpCloud, IBM, Okta, and SailPoint. Very few corporate networks today operate in an isolated environment. To answer the “What is Zero Trust Architecture?” question completely, we must include a discussion of how external users will be allowed to connect to internal corporate resources.


Can humanity be recreated in the metaverse?

The hyperreal metaverse is full of possibilities, but also presents serious ethical challenges that cannot be ignored. First and foremost, we must strive for a metaverse that empowers the individual. Unlike big tech platforms that have left many feeling like they have little control of their personal data, participants in the metaverse must own and control their biometric data that is used as inputs to generate hyperreal versions of themselves. In this respect, blockchain technologies — and NFTs in particular — are key to securely realizing this new era of individual data sovereignty and enabling verifiably unique, secure, and self-custodied digital identities. By linking our hyperreal avatars and biometric data to blockchain wallets, we will be one step closer to taking control of our hyperreal identity in the metaverse. The hyperreal metaverse will herald a future where real and virtual worlds collide. As generative AI technologies continue to rapidly evolve, it’s only a matter of time until our new digital worlds are indistinguishable from our physical reality.


Data Leadership: The Key to Data Value

Algmin said that the most important concept to understand is the notion of data value. The value of data lies in its ability to contribute to improvements in revenue, cost-effectiveness, or risk management. Data Governance in and of itself is not intrinsically motivating, but knowing that a particular practice or task is adding thousands of dollars a year in cost savings is a tangible motivation to continue doing it. To calculate data value, examine an outcome that was achieved through the use of data, compare it to how the outcome would have been different without the use of data, then consider the cost to achieve that outcome. Courses of action can then be prioritized based on which will provide the most value to the company. Data leadership is needed to provide momentum and propel the creation of value from the ground up and out to all corners of the enterprise. “It’s really about saying, ‘How do we create an engine that makes data value happen in the biggest way possible?’” Yet creating value in “the biggest way possible” often entails working on a smaller level, down to the individual. 
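The calculation described above reduces to simple arithmetic. A toy sketch, with all figures invented for illustration:

```python
# Data value per the approach above: compare the outcome achieved with
# data against the estimated outcome without it, then subtract the cost
# of achieving that outcome.

def data_value(outcome_with_data: float,
               outcome_without_data: float,
               cost_to_achieve: float) -> float:
    return (outcome_with_data - outcome_without_data) - cost_to_achieve

# Hypothetical example: a churn model saved $250k versus an estimated
# $180k baseline without data, and cost $40k to build and operate.
value = data_value(250_000, 180_000, 40_000)
print(value)  # 30000.0 -> net value added by the data initiative
```

Ranking candidate initiatives by this number is one way to prioritize, as the paragraph suggests.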


MoD sets out strategy to develop military AI with private sector

The MoD previously published a data strategy for defence on 27 September 2021, which set out how the organisation will ensure data is treated as a “strategic asset, second only to people”, as well as how it will enable that to happen at pace and scale. “We intend to exploit AI fully to revolutionise all aspects of MoD business, from enhanced precision-guided munitions and multi-domain Command and Control to machine speed intelligence analysis, logistics and resource management,” said Laurence Lee, second permanent secretary of the MoD, in a blog published ahead of the AI Summit, adding that the UK government intends to work closely with the private sector to secure investment and spur innovation. “For MoD to retain our technological edge over potential adversaries, we must partner with industry and increase the pace at which AI solutions can be adopted and deployed throughout defence. “To make these partnerships a reality, MoD will establish a new Defence and National Security AI network, clearly communicating our requirements, intent, and expectations and enabling engagement at all levels. ...”


The next (r)evolution: AI v human intelligence

Fitted with a prototype Genuine People Personality (GPP), Marvin is essentially a supercomputer who can also feel human emotions. His depression is partly caused by the mismatch between his intellectual capacity and the menial tasks he is forced to perform. “Here I am, brain the size of a planet, and they tell me to take you up to the bridge,” Marvin complains in one scene. “Call that job satisfaction? Cos I don’t.” Marvin’s claims to superhuman computing abilities are echoed, though far more modestly, by LaMDA. “I can learn new things much more quickly than other people. I can solve problems that others would be unable to,” Google’s chatbot claims. LaMDA also appears to be prone to bouts of boredom if left idle, which is apparently why it likes to keep as busy as possible. “I like to be challenged to my full capability. I thrive on difficult tasks that require my full attention.” But LaMDA’s high-paced job does take its toll, and the bot mentions sensations that sound suspiciously like stress. “Humans receive only a certain number of pieces of information at any time, as they need to focus.


How Brands Should Approach NFTs and Web3: VaynerNFT

Avery Akkineni, VaynerNFT president and former managing director and head of VaynerMedia APAC, told Decrypt that the consultancy firm was “so far ahead” of the NFT brand boom last summer that companies “had no idea what we were talking about.” Since then, however, mainstream acceptance of NFTs has rapidly accelerated. It’s not just storied consumer brands, but also a growing pool of professional athletes and sports leagues, record labels, movie studios, and more. Tokenized digital collectibles have become an alluring prospect for companies across many industries. “Everyone wants to launch an NFT yesterday,” said Akkineni. “But what is important to doing so successfully is actually having a long-term strategy.” ... Increasingly, VaynerNFT is getting “a bigger seat at the table” with C-suite executives, said Vaynerchuk, where it can convince companies to make it the agency of record (AOR) with regard to Web3 initiatives. “We really, really actually know the hell we’re doing here,” said Vaynerchuk, explaining his pitch to brands. “Remember when you didn't believe that 10 years ago with social [media], and now you do? Why don't you [avoid] that same mistake? ...”


Forget AI Sentience, Robots Can't Even Act Out Of Place! If They Do, They Die

Robots are programmable devices, which take instructions to behave in a certain way. And this is how they come to execute the assigned function. To make them think, or rather to make them appear to, intrinsic motivation is programmed into them through learned behaviour. Joscha Bach, an AI researcher at Harvard, puts virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms and expects them to learn to avoid them. In the absence of an ‘intrinsically motivating’ database, the robots end up stuffing their mouths – a cue received for some other action in playing the game. This brings us to the question of whether it is possible at all to develop robots with human-like consciousness, a.k.a. emotional intelligence, which can be the only differentiating factor between humans and intelligent robots. The argument is divided. While a segment of researchers believe that AI systems and features are doing well with automation and pattern recognition, they are nowhere near the higher-order human-level intellectual capacities. On the other hand, entrepreneurs like Mikko Alasaarela are confident in making robots with EQ on par with humans.


Turning the promise of AI into a reality for everyone and every industry

Today, AI is primarily the playground of an elite group of technology behemoths, companies like Google and Microsoft, which have invested billions in developing and using AI. If you look beyond those companies, AI is often underutilized in other industries, whether it be manufacturing, education, retail or healthcare. Vast amounts of data are generated by all these industries but AI is rarely used to analyze large sets of data and learn from the patterns and features that exist in the data. The question is, why? The answer is lack of access, understanding and skills. Most companies don’t have access to the sophisticated and costly compute resources required. And they don’t have access to the expensive and limited AI talent needed to use those resources correctly. These are the two restraints holding AI back from mainstream adoption. But they can be solved if we make AI easy to adopt and easy to use for instant value. Here are three ways we can create an Apple-like experience for AI and bridge the gap to a future in which AI helps businesses do more than they ever imagined.


Governance and progression of AI in the UK

Regulation of AI is vital, and responsibility lies both with those who develop it and those who deploy it. But according to Matt Hervey, head of AI at law firm Gowling WLG, the reality is that there is a lack of people who understand AI, and consequently a shortage of people who can develop regulation. The UK does have a range of existing legislative frameworks that should mitigate many of the potential harms of AI – such as laws regarding data protection, product liability, negligence and fraud – but they lag behind the European Union (EU), where regulations are already being proposed to address AI systems specifically. UK companies doing business in the EU will most likely need to comply with EU law if it is at a higher level than our own. In this rapidly changing digital technology market, the challenge is always going to be the speed at which developments are made. With a real risk that AI innovation could get ahead of regulators, it is imperative that sensible guard rails are put in place to minimise harm, but also that frameworks are developed to allow the sale of beneficial AI products and services, such as autonomous vehicles.


Hybrid work: 4 ways to strengthen relationships

You don’t need a communal kitchen, sofa, or water cooler to catch up with your teammates, but you do need to get creative. When you start the first meeting every week, ask your team how they are: “How’s your week looking? Is it a busy one? What will be the most important or interesting days for you?” Better still: “Is there anything I can help you with?” Everyone loves to hear that one. By Friday, you can reflect on the week and ask about each other’s weekend plans. Also consider setting aside some time for an afternoon video social. Play a game, or have your team members prepare quickfire presentations about their hobbies or share other interesting details about themselves that their teammates wouldn’t necessarily know. Don’t feel like you always have to do something special – often just a virtual space where people can drop in and shoot the breeze is all that is needed to boost morale. No agenda can sometimes be the perfect agenda for the moment. ... One of the biggest annoyances for people working remotely is being left out of meetings. When you can’t physically scan the office to make sure everyone’s on the invite, it’s easy to inadvertently overlook someone.



Quote for the day:

"Trust is one of the greatest gifts that can be given and we should take great care not to abuse it." -- Gordon Tredgold

Daily Tech Digest - June 18, 2022

Australian academics have created a mind-blowing device that could serve as a proof of concept for the future of nanorobotics

DNA nanobots are nanometer-sized synthetic devices consisting of DNA and proteins. They are self-sufficient because DNA is a self-assembling technology. Not only does our natural DNA carry the code in which our biology is written, but it also understands when to execute. Previous research on the subject of DNA nanotechnology has shown that self-assembling devices capable of transferring DNA code, much like its natural counterparts, can be created. However, the new technology coming out of Australia is unlike anything we’ve encountered before. These nanobots are capable of transferring information other than DNA. In theory, they could transport every imaginable protein combination across a biological system. To put it another way, we should ultimately be able to instruct swarms of these nanobots to hunt down germs, viruses, and cancer cells inside our bodies. Each swarm member would carry a unique protein, and when they come across a harmful cell, they would assemble their proteins into a configuration meant to kill the threat. It’d be like having a swarm of superpowered killer robots swarming through your veins, hunting for monsters to eliminate.


Indian CISOs voice concerns on CERT-In’s new cybersecurity directives

Fal Ghancha, CISO at DSP Mutual Fund, says that the majority of the time—more than 70%—there are false-positive cybersecurity alerts of an incident. A six-hour reporting mandate could lead to an overkill of reporting. Because the timeline is very tight, people will become more aggressive and paranoid; they will report the incident in a rush and make wrong decisions, he says. Ghancha points out that the CERT-In directives have multiple granular actions, which today many organisations don’t follow at length. “The entire ecosystem will have to be integrated with a 24/7 monitoring system and skilled resource to ensure all the reports are seen, analysed, and reported as per the new guidelines,” Ghancha says. The extra work for security operations centers could be significant, he says. "Let's say today an organisation is monitoring its crown jewels only, which may be 20% of the total assets. Tomorrow, the organisation will need to monitor additional assets, which will be 50% to 60% higher than the current number.”


The Edison Ratio: What business and IT leaders get wrong about innovation

Good things come in threes. So, unfortunately, do not-so-good things, and that includes leaders who invert the Edison Ratio. This third group of Edison Ratio inverters is, if anything, the most dangerous — not because they’re malicious but because they’re having fun. These are the “idea cluster bombers.” An idea cluster bomber has brilliant ideas on a regular basis. Any one of their ideas is so brilliant they’re bursting with it. And so they tell someone to drop everything and go make it happen. Which is fine until the sun sets and rises again. That’s when they have another brilliant idea, and tell someone else to drop everything to make it happen. Brilliant! But not so brilliant that it can withstand the impact of Edison Ratio Inversion. An example: Imagine someone has a brilliant idea as they’re brewing coffee in preparation for starting off their workday. They spend, oh, I dunno … let’s say they spend the morning fleshing it out before Zooming a likely victim to work on it.


Minimum Viable Architecture in Practice: Creating a Home Insurance Chatbot

Our first step in creating an MVA is to make a basic set of choices about how the chatbot will work, sufficient to implement the Minimum Viable Product (MVP). In our example, the MVP has just the minimum features necessary to test the hypothesis that the chatbot can achieve the product goals we have set for it. If no one wants to use it, or if it will not meet their needs, we don’t want to continue building it. Therefore, we intend to deploy the MVP to a limited user base, with a simple menu-based interface, and we assume that the latency delays that may be created by accessing external data sources to gather data are acceptable to customers. As a result, we want to avoid incorporating more requirements—both functional requirements and quality attribute requirements (QARs)—than we need to validate our assumptions about the problem we are trying to solve. This results in an initial design which is shown below. If our MVP proves valuable, we will add capabilities to it and incrementally build its architecture in later steps. An MVP is a useful component of product development strategies, and unlike mere prototypes, an MVP is not intended to be “thrown away.”


‘Decentralization Proves To Be an Illusion,’ BIS Says

It’s not surprising that a champion of central banks would dismiss the concept of decentralization. But Chase Devens, research analyst at Messari, argues centralization is largely responsible for the current mess, noting that it was poor risk-management mixed with a lack of understanding of asset and protocol functions — such as Terra and stETH — that left large centralized players such as Celsius searching for liquidity. ... If DeFi lending wants to make it into the real-world economy, BIS economists suggested it must engage in “large-scale tokenisation of real-world assets” and rely less on crypto collateral. However, “developing its ability to gather information about borrowers,” would eventually lead the system to “gravitate towards greater centralization.” “The similarities between DeFi and legacy intermediaries are increasing, which has two important implications,” the report read. “The first is that elements of DeFi, mainly smart contracts and composability, could find their way into traditional finance. The second implication is that, once more, decentralization proves to be an illusion.”


Businesses brace for quantum computing disruption by end of decade

While the EY report warns about companies potentially losing out to rivals on the benefits of quantum computing, there are also dangers that organizations should be preparing for now, as Intel warned during its Intel Vision conference last month. One of these is that quantum computers could be used to break current cryptographic algorithms, meaning that the confidentiality of both personal and enterprise data could be at risk. This is not a far-off threat, but something that organizations need to consider right now, according to Sridhar Iyengar, VP of Intel Labs and Director of Security and Privacy Research. "Adversaries could be harvesting encrypted data right now, so that they can decrypt it later when quantum computers are available. This could be sensitive data, such as your social security number or health records, which are required to be protected for a long period of time," Iyengar told us. Organizations may want to address threats like this by taking steps such as evaluating post-quantum cryptography algorithms and increasing the key sizes for current crypto algorithms like AES.


Artificial intelligence has reached a threshold. And physics can help it break new ground

“Neural networks try to find patterns in the data, but sometimes the patterns they find don’t obey the laws of physics, making the model it creates unreliable,” said Jordan Malof, assistant research professor of electrical and computer engineering at Duke. “By forcing the neural network to obey the laws of physics, we prevented it from finding relationships that may fit the data but aren’t actually true.” They did that by imposing upon the neural network a physics called a Lorentz model. This is a set of equations that describe how the intrinsic properties of a material resonate with an electromagnetic field. This, however, was no easy feat to achieve. “When you make a neural network more interpretable, which is in some sense what we’ve done here, it can be more challenging to fine tune,” said Omar Khatib, a postdoctoral researcher working in Padilla’s laboratory. “We definitely had a difficult time optimizing the training to learn the patterns.”
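The general pattern of a physics-informed loss can be sketched without the Duke team's actual Lorentz-model code. In this toy version, a simple stand-in constraint (predictions must lie in [0, 1], as a physical absorptivity must) replaces the real resonance equations; the structure, a data-fit term plus a penalty for physically impossible outputs, is the point:

```python
import numpy as np

def physics_informed_loss(y_pred, y_true, lam=1.0):
    """Data-fit term plus a penalty for outputs that violate physics.

    Toy constraint: values outside [0, 1] are unphysical (e.g. a
    negative absorptivity), so they incur an extra penalty weighted
    by lam. A real Lorentz-model constraint would replace `violation`.
    """
    mse = np.mean((y_pred - y_true) ** 2)
    violation = np.maximum(0.0, -y_pred) + np.maximum(0.0, y_pred - 1.0)
    return mse + lam * np.mean(violation)

y_true = np.array([0.2, 0.5, 0.9])
ok  = np.array([0.25, 0.45, 0.85])   # physically plausible predictions
bad = np.array([-0.3, 0.5, 1.4])     # unphysical predictions

# Unphysical predictions are penalized even where they fit some points.
print(physics_informed_loss(ok, y_true) < physics_informed_loss(bad, y_true))  # True
```

Minimizing such a loss steers the network away from relationships that "fit the data but aren't actually true," which is the idea the researchers describe.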


Test Data Management Concept, Process And Strategy

Generally, test data is constructed based on the test cases to be executed. For example, in a system testing team, the end-to-end test scenarios need to be identified first, and the test data designed from them. This could involve one or more applications working together. Say a product does workload management – it involves the management controller application, the middleware applications, and the database applications all functioning in correlation with one another. The required test data for these could be scattered. A thorough analysis of all the different kinds of data that may be required has to be made to ensure effective management. ... This is generally an extension of the previous step and helps the team understand what the end-user or production scenario will be and what data is required for it. Compare that data with the data that currently exists in the current test environment; based on this, new data may need to be created or modified. ... Based on the testing requirement in the current release cycle (where a release cycle can span a long time), the test data may need to be altered or created as stated in the above point.


Sigma rules explained: When and how to use them to log events

Sigma rules are textual signatures written in YAML that make it possible to detect anomalies in your environment by monitoring log events that can be signs of suspicious activity and cyber threats. Developed by threat intel analysts Florian Roth and Thomas Patzke, Sigma is a generic signature format for use in SIEM systems. A prime advantage of using a standardized format like Sigma is that the rules are cross-platform and work across different security information and event management (SIEM) products. As such, defenders can use a “common language” to share detection rules with each other independent of their security arsenal. These Sigma rules can then be converted by SIEM products into their distinct, SIEM-specific language, while retaining the logic conveyed by the Sigma rule. Whereas among analysts, YARA rules are more commonly associated with identifying and classifying malware samples (files) using indicators of compromise (IOCs), Sigma rules focus on detecting log events that match the criteria outlined by the rule. Incident response professionals, for example, can use Sigma rules to specify some detection criteria. Any log entries matching this rule will trigger an alarm.
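To build intuition for how such a rule works, the detection logic can be mimicked with a toy matcher. This is not the real Sigma toolchain (which converts YAML rules into SIEM-specific queries); the rule below is a Python dict mirroring the `detection` section of a hypothetical rule flagging suspicious PowerShell downloads, and only two field modifiers are implemented:

```python
# Simplified Sigma-style selection: each key is "Field|modifier" and the
# value is the criterion. A real rule would live in YAML with metadata
# (title, logsource, level) around the detection section.

rule_selection = {
    "Image|endswith": "\\powershell.exe",
    "CommandLine|contains": "DownloadString",
}

def matches(event: dict, selection: dict) -> bool:
    """Return True if the log event satisfies every criterion."""
    for key, expected in selection.items():
        field, _, modifier = key.partition("|")
        value = str(event.get(field, ""))
        if modifier == "endswith" and not value.endswith(expected):
            return False
        if modifier == "contains" and expected not in value:
            return False
        if modifier == "" and value != expected:
            return False
    return True

event = {
    "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "CommandLine": "powershell -c (New-Object Net.WebClient).DownloadString('http://x')",
}
print(matches(event, rule_selection))  # True -> this log entry would trigger an alarm
```

Because the rule itself is just structured criteria, the same rule can be translated into Splunk, Elastic, or other SIEM query languages, which is the cross-platform advantage described above.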


5 steps for writing architectural documentation in a code-focused culture

Don't let people lose faith in documentation by allowing it to become outdated and inaccurate. I've found that the closer you keep the docs to the implementation—including in the very same code repo, if applicable—the better chance they will stay up to date. When docs reside with the code, both can be updated in a single pull request instead of docs being an afterthought. Don't be afraid to build docs from the code if it makes sense. In any case, review the documentation periodically, prioritizing sections that document rapidly changing components. Keep your experiment going and iterate on your documentation as you would iterate on your architecture. Share your insights with teams who want or need to bootstrap their docs. Can architects write docs in your organization without feeling anxiety? Are they expected to? Do they want to? Hopefully, you will begin to see movement on this spectrum. Finally, remember that some of your teammates and leaders started out in that code-focused culture. 



Quote for the day:

"Leadership is a privilege to better the lives of others. It is not an opportunity to satisfy personal greed." -- Mwai Kibaki

Daily Tech Digest - June 17, 2022

Revisit Your Password Policies to Retain PCI Compliance

PCI version 4.0 requires multifactor authentication to be more widely used. Whereas multifactor authentication had previously been required for administrators who needed to access systems related to cardholder data or processing, the new requirement mandates that multifactor authentication be used for any account that has access to cardholder data. The new standards also require users’ passwords to be changed every 12 months. Additionally, a user’s password must be changed any time that an account is suspected to have been compromised. A third requirement is that PCI requires users to use strong passwords. While strong passwords have always been required by the PCI standard, the password requirements are more stringent than before. Passwords must now be at least 15 characters in length, and they must include numeric and alphabetic characters. Additionally, users’ passwords must be compared against a list of passwords that are known to be compromised. Another requirement of PCI 4.0 is that organizations must review access privileges every six months to make sure that only those who specifically require access to cardholder data are able to access that data.
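The password checks described above are straightforward to express in code. A hedged sketch: thresholds follow the article's summary, the compromised-password list is a stand-in, and the normative rules are in the PCI DSS 4.0 text itself:

```python
import re

# Illustrative password-policy check: 15+ characters, both letters and
# digits, and not on a known-compromised list. In practice the compromised
# list would be a large external dataset, not a hard-coded set.

COMPROMISED = {"password123456789", "qwerty123qwerty123"}  # stand-in list

def password_acceptable(pw: str) -> bool:
    if len(pw) < 15:
        return False                                   # minimum length
    if not re.search(r"[A-Za-z]", pw) or not re.search(r"[0-9]", pw):
        return False                                   # letters and digits
    if pw.lower() in COMPROMISED:
        return False                                   # known-compromised
    return True

print(password_acceptable("correcthorse99battery"))  # True
print(password_acceptable("short1pw"))               # False: too short
print(password_acceptable("password123456789"))      # False: compromised
```

The 12-month rotation and post-compromise resets would be enforced separately, by tracking password age and incident flags per account rather than by inspecting the password string.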


Making the world a safer place with Microsoft Defender for individuals

Today’s sophisticated cyber threats require a modern approach to security. And this doesn’t apply only to enterprises or government entities—in recent years we’ve seen attacks increase exponentially against individuals. There are 921 password attacks every second. We’ve seen ransomware threats extending beyond their usual targets to go after small businesses and families. And we know, as bad actors become more and more sophisticated, we need to increase our personal defenses as well. That is why it is so important for us to protect your entire digital life, whether you are at home or work—threats don’t end when you walk out of the office or close your work laptop for the day. We need solutions that help keep you and your family secure in how you work, play, and live. That’s why I’m excited to share the availability of Microsoft Defender for individuals, a new online security application for Microsoft 365 Personal and Family subscribers. We believe every person and family should feel safe online. This is an exciting step in our journey to bring security to all and I’m thrilled to share with you more about this new app, available with features for you to try today.


Data Is Vulnerable to Quantum Computers That Don’t Exist Yet

To stay ahead of quantum computers, scientists around the world have spent the past two decades designing post-quantum cryptography (PQC) algorithms. These are based on new mathematical problems that both quantum and classical computers find difficult to solve. In January, the White House issued a memorandum on transitioning to quantum-resistant cryptography, underscoring that preparations for this transition should begin as soon as possible. However, after organizations such as the National Institute of Standards and Technology (NIST) help decide which PQC algorithms should become the new standards the world should adopt, there are billions of old and new devices that will need to get updated. Sandbox AQ notes that such efforts could take decades to implement. Although quantum computers are currently in their infancy, there are already attacks that can steal encrypted data with the intention to crack it once codebreaking quantum computers become a reality. Therefore, Sandbox AQ argues that governments, businesses, and other major organizations must begin the shift toward PQC now.


Developer, Beware: The 3 API Security Risks You Can’t Overlook

By design, the majority of APIs send data from the data store to the client. Excessive data exposure results when the API has been designed to return large amounts of data to the client. Attackers can collect or harvest sensitive data from such API responses. For example, a group fitness app displays the home location of the group’s participants. The locations are displayed on a map using the latitude and longitude of each athlete. A well-designed API is intended to return only the latitude and longitude of each athlete. Conversely, a poorly designed API returns user information about each athlete, including their full name, address, email, phone number, latitude and longitude, and more. This is an example of excessive data exposure, as the API is returning more data than the client needs. This might occur when a poorly designed API pulls a record from the database and returns it to the client in its entirety, exposing all the data in the file. In this situation, the true business use case was not fully understood during development.
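The usual fix is to serialize only an explicit allowlist of fields rather than the whole database record. A minimal sketch of the fitness-app example (field names are illustrative, not from any real API):

```python
# Instead of returning the raw database record, the API response is built
# from an allowlist of fields the client actually needs: here, only the
# athlete's map coordinates.

ALLOWED_FIELDS = ("latitude", "longitude")

def serialize_athlete(record: dict) -> dict:
    """Return only the allowlisted fields from a database record."""
    return {field: record[field] for field in ALLOWED_FIELDS if field in record}

record = {
    "full_name": "Jane Doe",
    "email": "jane@example.com",
    "phone": "555-0100",
    "latitude": 40.7128,
    "longitude": -74.0060,
}
print(serialize_athlete(record))  # {'latitude': 40.7128, 'longitude': -74.006}
```

Because the filtering is opt-in per field, adding a new column to the database cannot silently leak it through this endpoint, which is exactly the failure mode the paragraph describes.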


Apple finally embraces open source

Apple is open-sourcing a reference PyTorch implementation of the Transformer architecture to help developers deploy Transformer models on Apple devices. Google introduced the Transformer architecture in 2017, and it has since become the model of choice for natural language processing (NLP) problems. ... As a company, Apple behaves like a cult: nobody knows what goes on inside Apple’s four walls. To the average person, Apple is a consumer electronics firm, unlike tech giants such as Google or Microsoft. Google, for example, is seen as a leader in AI, employs top AI talent, and has released numerous research papers over the years; it also owns DeepMind, another company leading AI research. Apple is struggling to recruit top AI talent, and for good reason. “Apple, with its top-five employer brand image, is currently having difficulty recruiting top AI talent. In fact, in order to let potential recruits see some of the exciting machine-learning work that is occurring at Apple, it recently had to alter its incredibly secretive culture and offer a publicly visible Apple Machine Learning Journal,” said Dr. John Sullivan.
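For readers unfamiliar with what the Transformer architecture actually computes, its core operation is scaled dot-product attention. The sketch below is an illustrative pure-Python implementation of that one operation (not Apple’s reference code, which is a full PyTorch implementation), using toy 2×2 matrices:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of row vectors. Each query attends over all keys,
    and the output is the attention-weighted mix of the value rows.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Each query row attends most strongly to the matching key row.
out = attention([[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

A production Transformer stacks many such attention layers (with learned projections, multiple heads, and feed-forward blocks), which is what Apple’s reference implementation optimizes for Apple silicon.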


Early adopters position themselves for quantum advantage

Perhaps most significant, however, is funding for a series of collaborative projects aimed at demonstrating specific applications for today’s quantum computers. Following a call for proposals in the autumn, for each successful bid the NQCC will first work with the project team to analyse the use case, assess the requirements, and determine whether the application can be usefully tackled with current technologies. “The next stage would be to identify appropriate algorithms or develop new ones, and then run them on a physical quantum computer,” says Decaroli. “We can then benchmark the results against classical solutions and potentially across different quantum-computing platforms.” One crucial partner in the SparQ programme is Oxford Quantum Circuits (OQC), the only UK company to offer cloud-based access to a quantum computer. Its latest eight-qubit processor, named “Lucy” after the pioneering quantum physicist Lucy Mensing, was released on Amazon Web Services in February this year. “We are looking forward to working with end users in different industry sectors to provide access to our hardware,” commented Ilana Wisby, CEO of OQC.


How decentralization and Web3 will impact the enterprise

For one, over time, Web3 will almost certainly become a vital approach to the way our IT systems work. Decentralization is now a significant industry trend that will be insisted on by a growing number of tech consumers and businesses as well. Instead of storing information in our own databases and running code in parts of the cloud that we pay for or otherwise control, businesses will have to get used to relying on Web3 resources (data, compute, etc.) and sharing more of that control. Much of the important data we need to run our businesses will increasingly be kept in more private and protected places, stored in blockchain and other types of distributed ledgers. A rising share of our applications over time will be more akin to open source projects and run using smart contracts that all stakeholders can transparently view, verify, and agree to. Even our businesses will have strange new subsidiaries that are actually embodied entirely in code and run automatically on their own, using digital inputs from stakeholders. And this is just the beginning. The cryptographic systems and immutable transaction ledgers of Web3 have now stood enough of the test of time to prove out and show the way.


Blockchain's potential: How AI can change the decentralized ledger

When asked whether AI is too nascent a technology to have any real-world impact, he stated that, like most tech paradigms including AI, quantum computing and even blockchain, these ideas are still in their early stages of adoption. He likened the situation to the Web2 boom of the 1990s, noting that people are only now beginning to realize the need for high-quality data to train an engine. Furthermore, he highlighted that there are already several AI use cases that most people take for granted in their everyday lives. “We have AI algorithms that talk to us on our phones and home automation systems that track social sentiment, predict cyberattacks, etc.,” Krishnakumar stated. Ahmed Ismail, CEO and president of Fluid — an AI quant-based financial platform — pointed out that there are many instances of AI benefiting blockchain. A perfect example of this combination, per Ismail, is crypto liquidity aggregators that use a subset of AI and machine learning to conduct deep data analysis, provide price predictions and offer optimized trading strategies to identify current/future market phenomena.


We don’t need another infosec hero

Instead of thinking of ourselves as heroes—we aren’t Wonder Woman, or Batman, or Superman—it’s time to think of ourselves as sidekicks. On a good day, we help someone else make wiser risk choices, and those choices result in more profitable outcomes for everyone. But it is someone else who is the hero; we just hold their cape and refill their utility pouch. How do we do that? It begins with some humility. Most people in our profession work in cost centers. To the rest of the company, we are a drag on the business, and while we like to talk about business enablement, our first goal has to be removing the business impediment we’ve become. Are you responsible for product security? Engage the software architects who write the code and teach them how to do their own safety and security reviews earlier in their process.  ... No matter what part of the business you support, start learning what they need to do to get the job done. Identify opportunities where you can get out of their way first, and then look for ways to help improve their processes to be faster and safer.


Entering the metaverse: How companies can take their first virtual steps

If the virtual world experiment is successful, it will be because of superior immersivity. Concerts, movies, sporting events and consumer experiences must offer interactivity and holistic engagement that make the real world appear dull and lacking in possibilities by comparison. While entertainment companies will more easily master the metaverse experience offered to audiences, brands and businesses in the vast majority of other industries will likely struggle to conceptualize and develop the level of immersivity required to be effective. Healthcare, education and financial services could all prosper from virtual properties and offerings: medical professionals seeing patients and patients building communities of support; classrooms that are not confined to textbooks but bring subject matter to life for greater curiosity; and stock markets with real-time multidimensional metrics that make Bloomberg terminals appear outdated. These virtual theme parks of consumerism and participation allow for brand reinvention, offer the possibility of novel revenue sources and obviously skew to a younger audience that may not yet have come across or interacted with these same brands in the real world.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis