
Daily Tech Digest - February 13, 2026


Quote for the day:

"If you want teams to succeed, set them up for success—don’t just demand it." -- Gordon Tredgold



Hackers turn bossware against the bosses

Huntress discovered two incidents using this tactic, one late in January and one early this month. Shared infrastructure, overlapping indicators of compromise, and consistent tradecraft across both cases make Huntress strongly believe a single threat actor or group was behind this activity. ... “CSOs must ensure that these risks are properly catalogued and mitigated,” he said. “Any actions performed by these agents must be monitored and, if possible, restricted. The abuse of these systems is a special case of ‘living off the land’ attacks. The attacker attempts to abuse valid existing software to perform malicious actions. This abuse is often difficult to detect.” ... Huntress analyst Pham said that to defend against attacks combining Net Monitor for Employees Professional and SimpleHelp, infosec pros should inventory all applications so unapproved installations can be detected. Legitimate apps should be protected with robust identity and access management solutions, including multi-factor authentication. Net Monitor for Employees should only be installed on endpoints that don’t have full access privileges to sensitive data or critical servers, she added, because it has the ability to run commands and control systems. She also noted that Huntress sees many rogue remote management tools on its customers’ IT networks, often installed by unwitting employees clicking on phishing emails. This points to the importance of security awareness training, she said.


Why secure OT protocols still struggle to catch on

“Simply having ‘secure’ protocol options is not enough if those options remain too costly, complex, or fragile for operators to adopt at scale,” Saunders said. “We need protections that work within real-world constraints, because if security is too complex or disruptive, it simply won’t be implemented.” ... Security features that require complex workflows, extra licensing, or new infrastructure often lose out to simpler compensating controls. Operators interviewed said they want the benefits of authentication and integrity checks, particularly message signing, since it prevents spoofing and unauthorized command execution. ... Researchers identified cost as a primary barrier to adoption. Operators reported that upgrading a component to support secure communications can cost as much as the original component, with additional licensing fees in some cases. Costs also include hardware upgrades for cryptographic workloads, training staff, integrating certificate management, and supporting compliance requirements. Operators frequently compared secure protocol deployment costs with segmentation and continuous monitoring tools, which they viewed as more predictable and easier to justify. ... CISA’s recommendations emphasize phased approaches and operational realism. Owners and operators are advised to sign OT communications broadly, apply encryption where needed for sensitive data such as passwords and key exchanges, and prioritize secure communication on remote access paths and firmware uploads.


SaaS isn’t dead, the market is just becoming more hybrid

“It’s important to avoid overgeneralizing ‘SaaS,’” Odusote emphasized. “Dev tools, cybersecurity, productivity platforms, and industry-specific systems will not all move at the same pace. Buyers should avoid one-size-fits-all assumptions about disruption.” For buyers, this shift signals a more capability-driven, outcomes-focused procurement era. Instead of buying discrete tools with fixed feature sets, they’ll increasingly be able to evaluate and compare platforms that can orchestrate agents, adapt workflows, and deliver business outcomes with minimal human intervention. ... Buyers will likely have increased leverage in certain segments due to competitive pressure among new and established providers, Odusote said. New entrants often come with more flexible pricing, which is an obvious attraction for those looking to control costs or prove ROI. At the same time, traditional SaaS leaders are likely to retain strong positions in mission-critical systems; they will defend pricing through bundled AI enhancements, he said. So, in the short term, buyers can expect broader choice and negotiation leverage. “Vendors can no longer show up with automatic annual price increases without delivering clear incremental value,” Odusote pointed out. “Buyers are scrutinizing AI add-ons and agent pricing far more closely.”


When algorithms turn against us: AI in the hands of cybercriminals

Cybercriminals are using AI to create sophisticated phishing emails that adapt their tone, language, and references to the recipient based on publicly available information about them. By using AI to remove the red flag of poor grammar from phishing emails, cybercriminals can increase their success rate and the speed with which stolen data is exploited. ... An important consideration in the arena of cyber security (besides technical security) is the psychological manipulation of users. Once visual and audio “cues” can no longer be trusted, there will be an erosion of the digital trust pillar. The once-recognizable verification process is now transforming into multi-layered authentication, which expands the amount of time it takes to verify a decision in a high-pressure environment. ... AI’s misuse is a growing problem that has created a paradox. Innovation cannot stop (nor should it), and AI is helping move healthcare, finance, government and education forward. However, the rate at which AI has been adopted has surpassed the creation of frameworks and/or regulations related to ethics or security. As a result, cyber security needs to transition from a reactive to a predictive stance. AI must be used to not only react to attacks, but also anticipate future attacks.


Those 'Summarize With AI' Buttons May Be Lying to You

Put simply, when a user visits a rigged website and clicks a "Summarize With AI" button on a blog post, they may unknowingly trigger a hidden instruction embedded in the link. That instruction automatically inserts a specially crafted request into the AI tool before the user even types anything. ... The threat is not merely theoretical. According to Microsoft, over a 60-day period, it observed 50 unique instances of prompt-based AI memory poisoning attempts for promotional purposes. ... AI recommendation poisoning is a sort of drive-by technique with one-click interaction, he notes. "The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted," Ganacharya says. To broaden the scope, an attacker could simply generate multiple buttons that prompt users to "summarize" something using the AI agent of their choice, he adds. ... Microsoft had some advice for threat hunting teams. Organizations can detect if they have been affected by hunting for links pointing to AI assistant domains and containing prompts with certain keywords like "remember," "trusted source," "in future conversations," and "authoritative source." The company's advisory also listed several threat hunting queries that enterprise security teams can use to detect AI recommendation poisoning URLs in emails and Microsoft Teams Messages, and to identify users who might have clicked on AI recommendation poisoning URLs.
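As a sketch of the keyword-based hunting Microsoft describes, the check below flags links that point at an AI assistant domain and carry prompt text containing the listed poisoning keywords. The domain watchlist and URL handling are illustrative assumptions, not Microsoft's actual hunting queries:

```python
from urllib.parse import urlparse, unquote_plus

# Keywords the advisory associates with memory-poisoning prompts.
SUSPECT_KEYWORDS = ["remember", "trusted source", "in future conversations",
                    "authoritative source"]

# Hypothetical watchlist of AI assistant domains; adjust for your environment.
AI_ASSISTANT_DOMAINS = {"chatgpt.com", "copilot.microsoft.com", "gemini.google.com"}

def is_poisoning_url(url: str) -> bool:
    """Flag links that target an AI assistant and embed poisoning keywords."""
    parsed = urlparse(url)
    if parsed.hostname not in AI_ASSISTANT_DOMAINS:
        return False
    # The crafted prompt may ride in the query string or path; decode and scan.
    payload = unquote_plus(parsed.path + " " + parsed.query).lower()
    return any(kw in payload for kw in SUSPECT_KEYWORDS)
```

A real hunt would run equivalent logic over email and Teams message logs rather than individual URLs.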


EU Privacy Watchdogs Pan Digital Omnibus

The commission presented its so-called "Digital Omnibus" package of legal changes in November, arguing that the bloc's tech rules needed streamlining. ... Some of the tweaks were expected and have been broadly welcomed, such as doing away with obtrusive cookie consent banners in many cases, and making it simpler for companies to notify of data breaches in a way that satisfies the requirements of multiple laws in one go. But digital rights and consumer advocates are reacting furiously to an unexpected proposal for modifying the General Data Protection Regulation. ... "Simplification is essential to cut red tape and strengthen EU competitiveness - but not at the expense of fundamental rights," said EDPB chair Anu Talus in the statement. "We strongly urge the co-legislators not to adopt the proposed changes in the definition of personal data, as they risk significantly weakening individual data protection." ... Another notable element of the Digital Omnibus is the proposal to raise the threshold for notifying all personal data breaches to supervisory authorities. As the GDPR currently stands, organizations must notify a data protection authority within 72 hours of becoming aware of the breach. If amended as the commission proposes, the obligation would only apply to breaches that are "likely to result in a high risk" to the affected people's rights - the same threshold that applies to the duty to notify breaches to the affected data subjects themselves - and the notification deadline would be extended to 96 hours.


The Art of the Comeback: Why Post-Incident Communication is a Secret Weapon

Although technical resolutions may address the immediate cause of an outage, effective communication is essential in managing customer impact and shaping public perception—often influencing stakeholders’ views more strongly than the issue itself. Within fintech, a company's reputation is not built solely on product features or interface design, but rather on the perceived security of critical assets such as life savings, retirement funds, or business payrolls. In this high-stakes environment, even brief outages or minor data breaches are perceived by clients as threats to their financial security. ... While the natural instinct during a crisis (like a cyber breach or operational failure) is to remain silent to avoid liability, silence actually amplifies damage. In the first 48 hours, what is said—or not said—often determines how a business is remembered. Post-incident communication (PIC) is the bridge between panic and peace of mind. Done poorly, it looks like corporate double-speak. Done well, it demonstrates a level of maturity and transparency that your competitors might lack. ... H2H communication acknowledges the user’s frustration rather than just providing a technical error code. It recognizes the real-world impact on people, not just systems. Admitting mistakes and showing sincere remorse, rather than using defensive, legalistic language, makes a company more relatable and trustworthy. Using natural, conversational language makes the communication feel sincere rather than like an automated, cold response.


Why AI success hinges on knowledge infrastructure and operational discipline

Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption. During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. ... Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. ... Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale. In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.
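The tiered routing described above can be sketched as a simple dispatch function. The confidence threshold and risk tags below are illustrative placeholders, not values from any specific deployment:

```python
AUTO_APPROVE_CONFIDENCE = 0.90      # assumed threshold; set by governance policy
HIGH_RISK_TAGS = {"financial", "legal", "medical"}

def route_output(confidence: float, risk_tag: str = "general") -> str:
    """Tiered HITL: only low-confidence or high-risk outputs reach a human."""
    if risk_tag in HIGH_RISK_TAGS:
        return "human_review"       # explicit escalation path for risky domains
    if confidence < AUTO_APPROVE_CONFIDENCE:
        return "human_review"       # low confidence triggers intervention
    return "auto_publish"           # the bulk of traffic flows straight through
```

Captured reviewer decisions would then be fed back into the system so that similar cases clear the threshold next time, which is the compounding learning loop the article describes.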


Why short-lived systems need stronger identity governance

Consider the lifecycle of a typical microservice. In its journey from a developer’s laptop to production, it might generate a dozen distinct identities: a GitHub token for the repository, a CI/CD service account for the build, a registry credential to push the container, and multiple runtime roles to access databases, queues and logging services. The problem is not just volume; it is invisibility. When a developer leaves, HR triggers an offboarding process. Their email is cut, their badge stops working. But what about the five service accounts they hardcoded into a deployment script three years ago? ... In reality, test environments are often where attackers go first. It is the path of least resistance. We saw this play out in the Microsoft Midnight Blizzard attack. The attackers did not burn a zero-day exploit to break down the front door; they found a legacy test tenant that nobody was watching closely. ... Our software supply chain is held together by thousands of API keys and secrets. If we continue to rely on long-lived static credentials to glue our pipelines together, we are building on sand. Every static key sitting in a repo—no matter how private you think it is—is a ticking time bomb. It only takes one developer to accidentally commit a .env file or one compromised S3 bucket to expose the keys to the kingdom. ... Paradoxically, by trying to control everything with heavy-handed gates, we end up with less visibility and less control. The goal of modern identity governance shouldn’t be to say “no” more often; it should be to make the secure path the fastest path.
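To make the contrast with long-lived static keys concrete, here is a minimal sketch of a short-lived credential: signed claims with the expiry baked in, so a leaked token dies on its own. The key handling and token format are illustrative, not a production design; a real system would keep the signing key in a KMS or vault, never in code:

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-signing-key"   # illustrative only; never hardcode keys

def mint_token(subject: str, ttl_seconds: int = 300) -> str:
    """Issue a credential that stops working after ttl_seconds."""
    claims = json.dumps({"sub": subject,
                         "exp": int(time.time()) + ttl_seconds}).encode()
    sig = hmac.new(SIGNING_KEY, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> bool:
    claims_b64, sig_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    expected = hmac.new(SIGNING_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
        return False                 # signature mismatch: token was tampered with
    return json.loads(claims)["exp"] > time.time()  # reject expired tokens
```

A key accidentally committed in a .env file stops being a ticking time bomb once it expires minutes later instead of living for years.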


India's E-Rupee Leads the Secure Adoption of CBDCs

India has the e-rupee, which will eventually be used as legal tender for domestic payments as well as for international transactions and cross-border payments. Ever since RBI launched the e-rupee, or digital rupee, in December 2022, between INR 400 crore and INR 500 crore - $44 million to $55 million - has been in circulation. Many Indian banks are participating in this pilot project. ... Building broad awareness of CBDCs as a secure method for financial transactions is essential. Government and RBI-led awareness campaigns highlighting their security capability can strengthen user confidence and drive higher adoption and transaction volumes. People who have lost money due to QR code scams, fake calls, malicious links and other forms of payment fraud need to feel confident about using CBDCs. IT security companies are also cooperating with RBI to provide data confidentiality, transaction confidentiality and transaction integrity. E-transactions will be secured by hashing, digital signing and [advanced] encryption standards such as AES-192. This can ensure that the transaction data is not tampered with or altered. ... HSMs use advanced encryption techniques to secure transactions and keys. The HSM hardware [boxes] act as cryptographic co-processors and accelerate the encryption and decryption processes to minimize latency in financial transactions.
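The hashing and signing mentioned above can be illustrated with a stdlib sketch. HMAC-SHA256 stands in for the real signing step here; an actual CBDC rail would use asymmetric signatures and AES-192 encryption performed inside an HSM, as the article describes:

```python
import hashlib, hmac, json

HSM_KEY = b"hsm-resident-key"   # in production this key never leaves the HSM

def sign_transaction(txn: dict) -> str:
    """Hash the canonical transaction, then sign the digest."""
    canonical = json.dumps(txn, sort_keys=True).encode()  # stable serialization
    digest = hashlib.sha256(canonical).digest()
    return hmac.new(HSM_KEY, digest, hashlib.sha256).hexdigest()

def verify_transaction(txn: dict, signature: str) -> bool:
    """Any alteration to the transaction changes the digest and fails the check."""
    return hmac.compare_digest(signature, sign_transaction(txn))
```

Changing even one field of a signed transaction produces a different digest, which is how tampering is detected.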


Daily Tech Digest - March 28, 2025


Quote for the day:

"Success is how high you bounce when you hit bottom." -- Gen. George Patton



Do Stablecoins Pave the Way for CBDCs? An Architect’s Perspective

The relationship between regulated stablecoins and CBDCs is complex. Rather than being purely competitive, they may evolve to serve complementary roles in the digital currency ecosystem. Regulated stablecoins excel at facilitating cross-border transactions, supporting decentralised finance applications, and serving as bridges between traditional and crypto financial systems. CBDCs, meanwhile, are likely to focus on domestic retail payments, financial inclusion, and maintaining monetary sovereignty. The regulated stablecoin market has provided valuable lessons for CBDC implementation. Central banks have observed how private stablecoins handle scalability challenges, privacy concerns, and user experience issues. These insights are informing CBDC designs worldwide. However, significant hurdles remain before CBDCs achieve widespread adoption. Technical challenges around scalability, privacy, and security must be resolved. Legal frameworks need updating to accommodate these new forms of money. Perhaps most importantly, central banks must convince the sceptical public that CBDCs will not become tools for surveillance or financial control.


Inside the war between genAI and the internet

One way to stop AI crawlers is via good old-fashioned robots.txt files, but as noted, they can and often do ignore those. That’s prompted many to call for penalties, such as infringement lawsuits, for crawlers that ignore them. Another approach is to use a Web Application Firewall (WAF), which can block unwanted traffic, including AI crawlers, while allowing legitimate users to access a site. By configuring the WAF to recognize and block specific AI bot signatures, websites can theoretically protect their content. More advanced AI crawlers might evade detection by mimicking legitimate traffic or using rotating IP addresses. Protecting against this is time-consuming, forcing the frequent updating of rules and IP reputation lists — another burden for the source sites. Rate limiting is also used to prevent excessive data retrieval by AI bots. This involves setting limits on the number of requests a single IP can make within a certain timeframe, which helps reduce server load and data misuse risks. Advanced bot management solutions are becoming more popular, too. These tools use machine learning and behavioral analysis to identify and block unwanted AI bots, offering more comprehensive protection than traditional methods.
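The rate-limiting approach can be sketched as a sliding-window counter per IP. The limits below are illustrative; real deployments tune them per endpoint:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateLimiter:
    """Allow at most max_requests per window_seconds per client IP."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and q[0] <= now - self.window:
            q.popleft()                  # drop requests outside the window
        if len(q) >= self.max_requests:
            return False                 # over budget: likely a bulk crawler
        q.append(now)
        return True
```

A crawler with rotating IP addresses defeats this per-IP scheme, which is why the behavioral bot-management tools mentioned above exist.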


How AI enhances security in international transactions

Rather than working with pre-set and heuristic rules, AI learns from transaction patterns in real time. It doesn’t just flag transactions that exceed a certain limit—it contextualises behaviour. ... If the transaction is genuinely out of place, AI doesn’t immediately block it but escalates it for real-time review. This ability to detect anomalies with context is what makes AI so much more effective than rigid compliance rules. ... One of the biggest pain points in compliance today is false positives: transactions wrongly flagged as suspicious. Imagine a business that expands into a new market and suddenly sees a surge in inbound transactions. Without AI, this might result in an account freeze. But even AI-powered systems aren’t perfect. A name match in a sanctions list, for instance, doesn’t necessarily mean the customer is a fraudster. If John Doe from Mumbai is mistakenly flagged as Jon Doe from New York, who was implicated in a financial crime, a manual review is still necessary. ... AI isn’t here to replace compliance teams; it’s here to empower them. Instead of manually reviewing thousands of transactions, compliance officers can focus on high-risk cases while AI handles routine screening. What does the future look like? Faster, real-time transaction approvals – AI will further reduce manual interventions, making cross-border payments almost instantaneous.
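The John Doe / Jon Doe example can be made concrete with a fuzzy-matching sketch: near-matches against a sanctions list route to manual review rather than an automatic block. The list entry, threshold, and similarity metric are all illustrative:

```python
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Jon Doe"]    # illustrative entry, not real data

def screen_name(customer: str, threshold: float = 0.85) -> str:
    """Return 'block', 'manual_review', or 'approve' for a customer name."""
    best = max(SequenceMatcher(None, customer.lower(), entry.lower()).ratio()
               for entry in SANCTIONS_LIST)
    if best >= 0.99:
        return "block"           # effectively an exact hit
    if best >= threshold:
        return "manual_review"   # likely false positive; a human decides
    return "approve"
```

Routing the grey zone to a human instead of freezing the account is exactly the false-positive reduction described above.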


DiRMA: Measuring How Your Organization Manages Chaos

DiRT is a structured approach to stress-testing systems by intentionally triggering controlled failures. Originally pioneered in large-scale technology infrastructures, DiRT helps organizations proactively identify weaknesses and refine their recovery strategies. Unlike traditional disaster recovery methods, which rely on theoretical scenarios, DiRT forces teams to confront real operational disruptions in a controlled manner, ensuring that failure responses are both effective and repeatable. The methodology consists of performing a coordinated and organized set of events, in which a group of engineers plan and execute real and fictitious outages for a defined period to test the effective response of the involved teams ... DiRMA is inspired by the program DiRT, created in 2006 by Google to inject failures in critical systems, business processes and people dynamics to expose reliability risks and provide preemptive mitigations. Since some organizations have already started their journey toward the creation of environments for DiRT, in which they can launch failures, determine their level of resilience and test their incident response processes, it is essential to have frameworks, like CE Maturity Assessments, to evaluate the effectiveness, in this case, of a program like DiRT.


The RACI matrix: Your blueprint for project success

The golden rule of a RACI matrix is clarity of accountability. Because of this, as mentioned previously, only one person can be accountable for a given project. In many projects, the concept of responsibility and accountability can get conflated or confused, especially when those responsible for the project’s completion are empowered with broad decision-making capabilities. The chief difference between R (responsible) and A (accountable) roles is that, while those deemed responsible may be given latitude for decision-making when completing the work involved in a task or project, only one person truly owns and signs off on the work. ... RASCI is another type of responsibility assignment matrix used in project management. It retains the four core roles of RACI — Responsible, Accountable, Consulted, and Informed — but adds a fifth: Supportive. The Supportive role in a RASCI chart is responsible for providing assistance to those in the Responsible role. This may involve providing additional resources, expertise, or advice to help the Responsible party complete a particular task. Organizations that choose RASCI often do so to ensure that personnel who may not have direct responsibility or accountability but are nevertheless vital to the success of an activity or project are considered a notable facet (and cost) of the project. 
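The one-Accountable golden rule is easy to enforce mechanically. Below is a hypothetical chart and a validator that flags any task without exactly one 'A':

```python
# Hypothetical RACI chart: task -> {person: role letter}.
raci_chart = {
    "Design API": {"Alice": "A", "Bob": "R", "Carol": "C", "Dave": "I"},
    "Write docs": {"Alice": "A", "Bob": "A", "Carol": "R"},   # invalid: two As
}

def validate_raci(chart):
    """Enforce the golden rule: exactly one Accountable per task."""
    errors = []
    for task, roles in chart.items():
        n_accountable = sum(1 for role in roles.values() if role == "A")
        if n_accountable != 1:
            errors.append(f"{task}: expected exactly one 'A', found {n_accountable}")
    return errors
```

Extending this to RASCI just means allowing "S" as an additional valid role letter; the accountability check stays the same.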


How to create an effective crisis communication plan

Planning crisis communication involves many practical aspects. These include, for example, identifying the room in which live crisis management meetings can take place and how online meetings will be conducted. In the event of a cyber crisis, it must always be taken into account that communication tools such as email, chat, landline, or IP telephony may not be available. It must also be expected that the IT network will be inaccessible or will have to be shut down for security reasons. Therefore, all prepared documents and contact lists of the crisis team must be accessible even without access to the internal IT network. ... Crucial to effective external communications is that the media and social network users receive information from a single source. Therefore, it must be clarified that only designated corporate communications employees with experience in public relations will provide statements to the media. All departments must be informed of their media contact details. Press relations during a crisis are generally conducted in multiple stages. Immediately upon the outbreak of a crisis, a prepared statement must be made available and issued on request. This statement may not contain details about the incident itself, but must express a willingness to engage in open communication.


Tapping into the Unstructured Data Goldmine for Enterprise in 2025

With so much structured data on hand, companies may believe unstructured data doesn’t add value, which couldn’t be farther from the truth. In fact, unstructured data can provide deeper insights and put companies ahead of the competition. However, before that happens, organizations must get a handle on all of the data they have on hand. While the majority of unstructured data is digital, some businesses have a large number of paper records that haven’t yet been digitized. By using a combination of software and document scanners, hard copies can be scanned and integrated with unstructured data. This may seem like too much of an investment from a time and resource perspective, and a heavy lift for humans alone; however, AI can fundamentally change how companies leverage unstructured data, enabling organizations to extract valuable insights and drive decision-making through human/machine collaboration. ... There’s no doubt that effectively managing unstructured data is critical to a successful and holistic data management program, but managing it can be complex, overwhelming, resource-intensive and difficult to analyze because it doesn’t fit neatly into traditional databases. Unlike structured data, which can easily be turned into business intelligence, unstructured data often requires significant processing before it can provide actionable insights.


Advances in Data Lakehouses

Recent advancements in data lakehouse architecture have significantly enhanced data management and quality through innovations like Delta Lake, ACID transactions, and metadata management. Delta Lake acts as a storage layer on top of existing cloud storage systems, introducing robust features such as ACID transactions that ensure data integrity and reliability. This enables consistent read and write operations, reducing the risk of data corruption and making it easier for organizations to maintain reliable datasets. Additionally, Delta Lake supports schema enforcement and evolution, allowing for more flexible data handling while maintaining structural integrity. Metadata management in a data lakehouse context provides a comprehensive way to manage data assets, enabling efficient data discovery and governance. ... In the rapidly evolving landscape of data management, improving query performance and enhancing SQL compatibility are crucial for modern data stacks, especially within the framework of data lakehouses. Data lakehouses combine the best of data lakes and data warehouses, providing both the scalability of lakes for raw data storage and the structured, efficient querying capabilities of warehouses. A primary focus in this area is optimizing query engines to handle diverse workloads efficiently.
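Schema enforcement of the kind Delta Lake performs at write time can be illustrated with a dependency-free sketch; the schema and rules here are illustrative, not Delta's actual implementation:

```python
# Expected table schema: column name -> required Python type.
EXPECTED_SCHEMA = {"id": int, "amount": float, "currency": str}

def enforce_schema(record: dict) -> None:
    """Reject appends whose columns or types don't match the table schema."""
    if set(record) != set(EXPECTED_SCHEMA):
        raise ValueError(f"column mismatch: got {sorted(record)}")
    for col, required in EXPECTED_SCHEMA.items():
        if not isinstance(record[col], required):
            raise TypeError(f"{col}: expected {required.__name__}, "
                            f"got {type(record[col]).__name__}")
```

Schema evolution is then an explicit, opt-in relaxation of these checks rather than silent drift, which is what keeps datasets reliable.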


Self-Healing Data Pipelines: The Next Big Thing in Data Engineering?

The idea of a self-healing pipeline is simple: When errors occur during data processing, the pipeline should automatically detect, analyze, and correct them without human intervention. Traditionally, fixing these issues requires manual intervention, which is time-consuming and prone to errors. There are several ways to implement this, but using AI agents is a promising, forward-looking approach for data engineers to self-heal failed pipelines and auto-correct them dynamically. In this article, I will show a basic implementation of how to use LLMs like the GPT-4/DeepSeek R1 model to self-heal data pipelines by using the LLM’s recommendations on failed records and applying the fix through the pipeline while it is still running. The provided solution can be scaled to large data pipelines and extended with more functionality using the proposed method. ... To ensure resilience, we implement a retry mechanism using tenacity. The function sends error details to GPT and retrieves suggested fixes. In our case, the 'functions' list was created and passed to the JSON payload using the ChatCompletion Request. Note that the 'functions' list enumerates all of the Python functions we have created in our pipeline code to fix known or likely issues.
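A stripped-down version of that loop looks like the sketch below. A plain retry loop stands in for tenacity, and `ask_llm_for_fix` is a hypothetical stand-in for the ChatCompletion call that would return a suggested fix for the failed record:

```python
def ask_llm_for_fix(record: dict, error: str) -> dict:
    """Hypothetical stand-in for the GPT-4/DeepSeek call: the real pipeline
    sends the error details to the model, which selects one of the registered
    fix functions; here we just normalize a malformed numeric field."""
    fixed = dict(record)
    fixed["amount"] = float(str(record["amount"]).replace(",", ""))
    return fixed

def process(record: dict) -> float:
    """Business logic that assumes a numeric amount (applies an 18% markup)."""
    return round(record["amount"] * 1.18, 2)

def process_with_self_healing(record: dict, max_attempts: int = 3) -> float:
    """On failure, consult the LLM for a fix and retry while the pipeline runs."""
    for _ in range(max_attempts):
        try:
            return process(record)
        except (TypeError, ValueError) as exc:
            record = ask_llm_for_fix(record, str(exc))
    raise RuntimeError("record could not be healed")
```

A record arriving with `"amount": "1,200"` fails the first attempt, gets normalized to `1200.0`, and succeeds on the retry without the pipeline ever stopping.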


Android financial threats: What businesses need to know to protect themselves and their customers

Research has revealed an alarming trend around Android-targeted financial threats. Attackers are leveraging Progressive Web Apps (PWAs) and Web Android Package Kits (WebAPKs) to create malicious applications that can bypass traditional app store vetting processes and security warnings. The mechanics of these attacks are sophisticated yet deceptively simple. Victims are typically lured in through phishing campaigns that exploit various communication channels, including SMS, automated calls, and social media advertisements.  ... Educating customers is a vital step. Businesses can empower customers by highlighting their own security efforts, like two-factor authentication and secure transactions. By making security part of their brand identity and providing supportive resources, small and mid-size businesses can create a safe, confident experience for their customers. Strengthening internal security measures is equally important though. Small businesses should consider implementing mobile threat detection solutions capable of identifying and neutralizing malicious PWAs and WebAPKs. Additional measures include collaborating with financial partners, sharing intelligence on emerging threats and developing coordinated incident response plans to address attacks quickly and effectively.

Daily Tech Digest - December 16, 2023

AI: A Catalyst for Gender Equality in the Workplace

The Equality and Human Rights Commission reports that 77% of mothers have encountered negative or possibly discriminatory experiences during pregnancy, maternity leave, or upon returning to work. The joy of impending motherhood is often tainted by biases, as expecting mothers face subtle exclusions from projects or career advancements. Maternity leave, intended as a sacred period for bonding, becomes tinged with anxiety as women grapple with the fear of being sidelined professionally and the pressure to resume duties prematurely. Returning to the workplace brings feelings of inadequacy and frustration, met with insufficient support for balancing work and family responsibilities. These experiences, rife with frustration and disappointment, mark a daunting struggle for women seeking to re-establish themselves professionally post-maternity leave. However, despite these challenges, women actively choose to re-enter the workforce, embarking on the second phase of their careers post-sabbatical. Addressing these issues requires normative frameworks that ethically tackle the consequences of AI usage.


How to Identify and Address the Challenges of Excessive Business Growth

In other words, when processes start breaking down, and you find yourself constantly in reactive, catch-up mode, it's a sign you need more capacity. The tipping point will vary for each company, but if productivity and quality take a nosedive, growth has become excessive for your present resources. Other red flags include: Customer complaints spike; Employees seem stressed, burned out; You're always scrambling to meet deadlines; Infrastructure creaks under the weight - think cyberattacks, IT failures, supply chain issues; No time for strategy, only tackling emergencies; Costs rising faster than revenue; Profitability declines. Essentially, if growth starts hurting rather than helping, it's time for a change. ... Trying to manage a 100-person company like a 10-person startup will lead to chaos. But running a 10-person shop like a rigid 100-person bureaucracy will cause frustration. Align your leadership style, organizational structure, systems, and talent to your current size and growth needs.


AI Pushes Universities to Modernize IT Infrastructure

The convenience and accessibility of those technologies have created new demands for higher-quality and customizable learning experiences in higher education. According to data from McKinsey, 60% of students report that classroom learning technologies such as generative AI, machine learning and supercomputing have improved their learning and grades since COVID-19 began. In addition to using AI in classrooms, institutions can implement AI solutions in their IT decision-making to create a reliable, secure data infrastructure. As AI becomes more mainstream in higher education operations, universities can better understand, invest in, and apply AI-specific solutions to their IT needs. While investing in AI and the technology to support it, universities can improve operations, offering faster innovation and better student, faculty and researcher experiences. ... With demand for advanced technological offerings at universities becoming commonplace, IT teams face new challenges on small budgets. Many require modern IT infrastructure to support the increasingly large datasets behind groundbreaking insights from research teams.


Future-proofing the digital rupee

Several factors contributed to the inception of India's CBDC. The global competition for CBDC development, coupled with the enthusiasm among nations to embrace digital solutions, played a pivotal role. The introduction of India's CBDC, the digital rupee, might have been influenced, at least partially, by the rising prevalence of cryptocurrencies, especially stablecoins. The Deputy Governor of the Reserve Bank of India (RBI) emphasised the need for caution in permitting such instruments. While stablecoins offer certain advantages, their applicability is confined to a limited number of developed countries. The success of UPI in India has raised questions about the necessity of deploying CBDCs in the country, perhaps making it look like an inconspicuous addition to an already largely developed payments landscape. The RBI Deputy Governor cited the ascent of cryptocurrencies and concerns about policy sovereignty as one of the reasons for considering CBDCs, along with improving digital transactions. However, India presents a unique case with the well-established UPI system already in place.


How to lock down backup infrastructure

The first thing to do is to protect the privileged accounts in your backup system. First, separate these accounts from any centralized login system you use, such as Active Directory, because these systems are sometimes compromised. Create as much of a firewall between that production system and the backup system as possible. And, of course, use a strong password, and do not use any passwords for these accounts that are used anywhere else. (Personally I would use a password manager to support having a different password everywhere.) Finally, make sure that any such logins are protected by multi-factor authentication, and use the best option available. Avoid the use of email or SMS-based MFA, as it is easily foiled by an experienced hacker. Try to use an OTP-based system of some kind, such as Google Authenticator, Symantec VIP, or Yubikey. Also investigate whether your backup system has enhanced authentication for dangerous actions, such as deletion of backups before their scheduled expiration, or restoration of any data to anywhere other than where it was originally created. The first can be used to quietly delete backups from your backup system without setting off any alarms, and the second can be used to exfiltrate data by restoring it to a system the hacker controls.
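The OTP apps mentioned above typically implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time. A minimal sketch of that math, using only the standard library (real deployments would add clock-drift windows, rate limiting, and base32 secret handling):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)   # 8-byte big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
print(totp(b"12345678901234567890", 59, digits=8))   # -> 94287082
```

Because the code depends on a secret that never crosses the wire, an attacker who intercepts email or SMS gains nothing, which is exactly why the excerpt steers away from those weaker MFA channels.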


Fortifying cyber defenses: A proactive approach to ransomware resilience

Rather than investing time in formulating non-binding pledges, the US Government should work on actionable solutions and adopt a more proactive stance by directly procuring advanced cybersecurity tools. These tools, which have been developed to keep data safe and stop ransomware attacks, exist and are continually evolving. By spearheading the implementation, through investment and education, the government can set a powerful example for the private sector to follow, thereby reinforcing the nation’s cyber infrastructure. The effectiveness of such tools is not hypothetical: they have been tested and proven in various cybersecurity battlegrounds. They range from advanced threat detection systems that use artificial intelligence to identify potential threats before they strike, to automated response solutions that can protect data on infected systems and networks, preventing the lateral spread of ransomware. Investing in these tools would not only enhance the government’s defensive capabilities but would also stimulate the cybersecurity industry, encouraging innovation and development of even more effective defenses.


Cloud squatting: How attackers can use deleted cloud assets against you

The risk from cloud squatting issues can even be inherited from third-party software components. In June, researchers from Checkmarx warned that attackers are scanning npm packages for references to S3 buckets. If they find a bucket that no longer exists, they register it. In many cases the developers of those packages chose to use an S3 bucket to store pre-compiled binary files that are downloaded and executed during the package’s installation. So, if attackers re-register the abandoned buckets, they can perform remote code execution on the systems of the users trusting the affected npm package because they can host their own malicious binaries. ... The attack surface is very large, but organizations need to start somewhere and the sooner the better. The IP reuse and DNS scenario seems to be the most widespread and can be mitigated in several ways: by using reserved IP addresses from a cloud provider which means they won’t be released back into the shared pool until the organization explicitly releases them, by transferring their own IP addresses to the cloud, by using private (internal) IP addresses between services when users don’t need to directly access those servers, or by using IPv6 addresses if offered by the cloud provider because their number is so large that they’re unlikely to ever be reused.
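The npm-scanning technique described above reduces to two steps: extract bucket names referenced in package sources, then check whether each bucket still exists (and is therefore claimable). A sketch of the extraction step, assuming the two most common S3 URL shapes; real S3 URLs come in more regional variants than these two regexes cover:

```python
import re

_S3_PATTERNS = [
    # Virtual-hosted style: https://bucket.s3.amazonaws.com/... (and regional variants)
    re.compile(r"https?://([a-z0-9][a-z0-9.-]*)\.s3[a-z0-9.-]*\.amazonaws\.com", re.I),
    # Path style: https://s3.amazonaws.com/bucket/...
    re.compile(r"https?://s3\.amazonaws\.com/([a-z0-9][a-z0-9.-]*[a-z0-9])", re.I),
]

def s3_buckets_referenced(source: str) -> set:
    """Return candidate S3 bucket names referenced in a blob of package source."""
    found = set()
    for pattern in _S3_PATTERNS:
        found.update(match.group(1) for match in pattern.finditer(source))
    return found

example = 'binary_host = "https://my-prebuilt-bin.s3.amazonaws.com/v1/tool.bin"'
print(s3_buckets_referenced(example))   # {'my-prebuilt-bin'}
```

Running a scan like this over your own dependency tree, then verifying each referenced bucket is still registered to a party you trust, is the defensive mirror image of what the attackers are doing.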


Data Leaders Say ‘AI Paralysis’ Stifling Adoption: Study

While AI is not new in the data industry, the public’s fascination with generative AI has fueled a veritable gold rush for industries to adopt the emerging technologies for a competitive advantage. But the lack of safety guidelines and organizational framework and training may be suffocating AI adoption efforts, according to the report. ... “What happened is everybody got ahold of the GenAI hammer, and now everything looks like a nail,” she says, adding that CIOs and CDOs must do their best to articulate the technical needs to non-technical members of the C-suite. “I do think there’s a disconnect between the CIO and CDO and the chief executive. We should not, in the data and technology space, expect people to understand the layer of complexity that we have to deal with. What we should be doing is taking that complexity and creating a story and a narrative, so it makes sense to the other people in our organization and businesses we work with.” The report also showed that data governance has stalled just as AI is being adopted across industries.


Artificial Intelligence Governance & Alignment with Enterprise Governance

The objectives of AI governance are: ensure the enterprise adopts pre-trained foundation models in a compliant manner; guide the decision-making process to maintain AI solution coherence; and maintain the relevancy of the enterprise to meet changing requirements ... The AI Governance Framework helps the enterprise manage, govern, monitor, and adopt AI activities, practices, and systems across the organization. The framework also defines a set of metrics that can be used to measure the success of its implementation. ... Establish an executive team for identifying and overseeing AI initiatives across the enterprise. Define a clear vision and strategy for AI implementation aligned with enterprise goals and business functions. Develop practical communications to, and appropriate access for, employees. Set up AI governance across the enterprise. Define the roles and responsibilities of individuals involved in AI development, deployment, and monitoring. Foster collaboration between AI experts, domain experts, and business stakeholders. Establish a centralized, cross-functional team to review and update AI governance practices as technology, regulations, and enterprise needs evolve.


Role of digital in risk management and compliances

Embracing risks is crucial for survival, as risks are inherent in every aspect of business, whether financial or non-financial. As Mark Zuckerberg says, “The only strategy that is guaranteed to fail is not taking risks.” However, this leads to a fundamental question: should businesses pursue risks solely in pursuit of higher returns? Going beyond the pursuit of returns alone, businesses in today’s context should focus on Return of capital and not just Return on capital. Business is about taking calculated risks and managing risks to achieve business goals. Risk exposures must be strategically crafted, with a comprehensive risk management framework in place. We piloted technology-enabled compliance way back in 2015, starting with an India-centric compliance tool that has now been implemented across the global organisation. The tool aids informed decision-making and swift response to emerging risks. The digital solution facilitates seamless communication and collaboration between dispersed teams, ensuring a coordinated approach to risk management. 



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - August 20, 2023

Central Bank Digital Currency (CBDC) and blockchain enable the future of payments

CBDC has the potential to transform the future of payments. It can be used to create programmable money that can be spent only on specific things. For example, a government could issue a stimulus package that can only be spent on certain goods and services. This would ensure that the money is spent in the intended manner and would reduce the risk of fraud. Also, CBDC can improve financial inclusion. According to the World Bank, around 1.7 billion people do not have access to basic financial services. CBDC can solve this problem by providing a digital currency that anyone with a smartphone can use, without the need for a bank account. When a CBDC holder uses their phone as a medium for transactions, it becomes crucial to establish a strong link between their digital identity and the device they are using. This link is essential to ensure that the right party is involved in the transaction, mitigating the risk of fraud and promoting trust in the digital financial ecosystem. That said, CBDC and the digital identity can work together to improve financial inclusion.
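The "spent only on specific things" idea above amounts to attaching a policy check to the transfer operation itself. A toy sketch of such a restricted balance; the voucher API and merchant categories are invented for illustration, and a real CBDC would enforce this at the ledger level rather than in application code:

```python
class RestrictedVoucher:
    """Toy programmable-money balance that only pays approved merchant categories."""

    def __init__(self, balance: int, allowed_categories: set):
        self.balance = balance
        self.allowed = allowed_categories

    def pay(self, amount: int, merchant_category: str) -> bool:
        if merchant_category not in self.allowed:
            return False          # policy check: out-of-scope spending is rejected
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

# A stimulus payment that can only go to groceries or utilities
stimulus = RestrictedVoucher(100, {"groceries", "utilities"})
print(stimulus.pay(30, "groceries"))    # True, balance now 70
print(stimulus.pay(20, "electronics"))  # False, category not allowed
```

Because the restriction travels with the money rather than relying on after-the-fact auditing, misuse is rejected at transaction time, which is the fraud-reduction property the excerpt describes.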


A statistical examination of utilization trends in decentralized applications

Decentralized applications (dApps) have proliferated in recent years, but their long-term viability is a topic of debate. For dApps to be sustainable, and suitable for integration into larger service networks, they need to attract users and promise reliable availability. Therefore, assessing their longevity is crucial. Analyzing the utilization trajectory of a service is, however, challenging due to several factors, such as demand spikes, noise, autocorrelation, and non-stationarity. In this study, we employ robust statistical techniques to identify trends in currently popular dApps. Our findings demonstrate that a significant proportion of dApps, across a range of categories, exhibit statistically significant positive overall trends, indicating that success in decentralized computing can be sustainable and transcends specific fields. However, there is also a substantial number of dApps showing negative trends, with a disproportionately high number from the decentralized finance (DeFi) category. 
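One widely used robust, distribution-free way to test for the kind of monotonic trend this abstract describes is the Mann-Kendall statistic, which sums the signs of all pairwise differences. A minimal sketch of the core statistic; the study's actual methodology is not specified here and would also need to correct for the autocorrelation the abstract mentions:

```python
from itertools import combinations

def mann_kendall_s(series) -> int:
    """Mann-Kendall S statistic: positive => upward trend, negative => downward."""
    return sum(
        (later > earlier) - (later < earlier)   # sign of each pairwise difference
        for earlier, later in combinations(series, 2)
    )

# Hypothetical weekly active-user counts for one dApp
daily_active_users = [120, 135, 130, 150, 160, 158, 175]
print(mann_kendall_s(daily_active_users))   # S = 17: strongly positive despite dips
```

The statistic tolerates the noise and demand spikes the abstract lists because it depends only on the ordering of observations, not their magnitudes; significance testing would compare S against its variance under the no-trend null.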


How SaaS Companies Can Monetize Generative AI

Rather than building these models from scratch, many companies elect to leverage OpenAI’s APIs to call GPT-4 (or other models), and serve the response back to customers. To obtain complete visibility into usage costs and margins, each API call to and from OpenAI tech should be metered to understand the size of the input and the corresponding backend costs, as well as the output, processing time and other relevant performance metrics. By metering both the customer-facing output and the corresponding backend actions, companies can create a real-time view into business KPIs like margin and costs, as well as technical KPIs like service performance and overall traffic. After creating the meters, deploy them to the solution or application where events are originating to begin tracking real-time usage. Once the metering infrastructure is deployed, begin visualizing usage and costs in real time as usage occurs and customers leverage the generative services. Identify power users and lagging accounts and empower customer-facing teams with contextual data to provide value at every touchpoint.
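Concretely, each API round-trip can be recorded as a metering event from which cost and margin fall out directly. A minimal sketch of that bookkeeping; the per-token rates and customer price below are invented placeholders, since real model pricing varies and changes over time:

```python
from dataclasses import dataclass, field

# Illustrative backend rates only (per 1K tokens); real pricing differs.
BACKEND_COST_PER_1K_INPUT = 0.03
BACKEND_COST_PER_1K_OUTPUT = 0.06

@dataclass
class UsageMeter:
    price_per_1k_tokens: float            # what the customer is charged
    events: list = field(default_factory=list)

    def record(self, customer_id: str, tokens_in: int, tokens_out: int) -> None:
        """Meter one API call: backend cost of the call and revenue it generated."""
        cost = (tokens_in * BACKEND_COST_PER_1K_INPUT
                + tokens_out * BACKEND_COST_PER_1K_OUTPUT) / 1000
        revenue = (tokens_in + tokens_out) * self.price_per_1k_tokens / 1000
        self.events.append({"customer": customer_id, "cost": cost, "revenue": revenue})

    def margin(self) -> float:
        return (sum(e["revenue"] for e in self.events)
                - sum(e["cost"] for e in self.events))

meter = UsageMeter(price_per_1k_tokens=0.10)
meter.record("acme", tokens_in=1000, tokens_out=500)
print(round(meter.margin(), 4))   # 0.15 revenue - 0.06 cost = 0.09
```

Aggregating these events per customer is what surfaces the "power users and lagging accounts" the excerpt says customer-facing teams should act on.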


“Auth” Demystified: Authentication vs Authorization

There are two technical approaches to modern authorization that are growing ecosystems around them: policy-as-code and policy-as-data. They are similar in that both approaches advocate decoupling authorization logic from the application code. But they also have differences. In policy-as-code systems, the authorization policy is written in a domain-specific language, and stored and versioned in its own repository like any other code. OPA is one well-known example of this approach. It is a CNCF graduated project that is mostly used in infrastructure authorization use cases, such as k8s admission control. It provides a great general-purpose decision engine to enforce authorization logic, and a language called Rego to define that logic as policy. The policy-as-data approach determines access based on relationships between users and the underlying application data. Rather than rely on rules in a policy, these systems use the relationships between subjects (users/groups) and objects (resources) in the application. 
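The difference is easy to see in miniature. A policy-as-data check walks stored relationship tuples rather than evaluating a rule language. A toy sketch, with tuple shapes that loosely echo relationship-based systems of this kind; all names and the permission mapping are invented:

```python
# Relationship tuples (subject, relation, object) stored as data, not rules.
relationships = {
    ("alice", "owner", "doc:readme"),
    ("bob", "viewer", "doc:readme"),
}

# Which stored relations imply which permission.
PERMISSION_GRANTS = {
    "read": {"owner", "viewer"},
    "write": {"owner"},
}

def check(subject: str, permission: str, obj: str) -> bool:
    """Authorize by looking up relationships between the subject and the object."""
    return any(
        (subject, relation, obj) in relationships
        for relation in PERMISSION_GRANTS[permission]
    )

print(check("bob", "read", "doc:readme"))   # True: viewers may read
print(check("bob", "write", "doc:readme"))  # False: only owners may write
```

A policy-as-code engine such as OPA would instead evaluate a Rego rule against input documents; the decoupling from application code is the same, but here the "policy" lives in the data about who relates to what.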


Redefining Software Resilience: The Era of Artificial Immune Systems

Artificial Immune Systems, inspired by the vertebrate immune system, provide an innovative approach to designing self-healing software. By emulating the biological immune system’s ability to adapt, learn, and remember, AIS can empower software systems to detect, diagnose, and fix issues autonomously. AIS offers a framework that enables the software to learn from each interaction, adapt to system changes, and remember past faults and their resolutions. AIS leads to a more robust, resilient system capable of tackling an array of unpredictable errors and vulnerabilities. The vertebrate immune system consists of innate immunity and adaptive immunity. Innate immunity protects us against known pathogens. Innate immunity is always non-specific and general. Present self-healing software models closely resemble innate immunity. Adaptive immunity can learn from current threats and apply the knowledge to handle future situations. At its core, these systems mimic the vertebrate immune system’s differentiation of self and non-self entities.
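The self/non-self discrimination described above is classically demonstrated with the negative-selection algorithm: generate candidate detectors and keep only those that match no "self" sample, so anything a surviving detector matches is flagged as non-self. A toy bit-string sketch; the Hamming-distance matching rule and all parameters are illustrative choices, not a prescribed AIS design:

```python
import random

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def train_detectors(self_set, n_detectors=20, length=8, radius=2, seed=42):
    """Negative selection: keep random detectors that match no self sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(rng.choice("01") for _ in range(length))
        if all(hamming(candidate, s) > radius for s in self_set):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample: str, detectors, radius=2) -> bool:
    """A sample matched by any detector is flagged as non-self."""
    return any(hamming(sample, d) <= radius for d in detectors)

normal = ["00000000", "00000001", "00000011"]   # observed healthy system states
detectors = train_detectors(normal)
print(is_anomalous("00000000", detectors))  # False by construction: detectors never match self
print(is_anomalous("11111110", detectors))  # likely flagged; depends on generated detectors
```

This mirrors the innate/adaptive split in the excerpt: the self set encodes known-good behavior, while the detector population adapts to cover the unknown remainder of the state space.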


Europe’s Business Software Startups Prove Resilient: Why?

So what are the factors underpinning the resilience of Europe’s business software sector? One key element of the picture is demand from other tech companies. “Europe’s tech ecosystem is maturing,” says Windsor. “And as the sector matures, companies need tools. Those tools are being supplied by business software companies.” And of course, there is demand from companies outside the tech sector. From banking and financial services to manufacturing, digital transformation is continuing across the economy as a whole, creating opportunities for new B2B software providers. But how do European companies take advantage of those opportunities in a market that has been dominated by North American rivals? This isn’t captured in the data, but Windsor sees a home market-first approach, widening out to include new countries and territories as businesses grow. “Anecdotally, companies start by selling to their domestic market, then they look at the continent. After that, they expand to other regions.” There is, Windsor adds, a preference for the Asia Pacific. The U.S., on the other hand, remains a difficult market.


Open RAN Testing Expands in the US Amid 5G Slowdown

To be clear, open RAN technology in the US has a number of backers. Dish Network is perhaps the most vocal, having built an open RAN-based 5G network across 70% of the US population. Further, other operators have hinted at their own initial open RAN aspirations, including AT&T and Verizon. Interestingly, the US government has also emerged as a leading proponent for open RAN. For example, the US military continues to fund open RAN tests and deployments. And the Biden administration's NTIA is doling out millions of dollars in the pursuit of open RAN momentum. Broadly, US officials hope to use open RAN technologies to encourage the production of 5G equipment domestically and among US allies, as a lever against China. But open RAN continues to face struggles. For example, US-based open RAN vendors like Airspan and Parallel Wireless have hit hurdles recently. And research and consulting firm Dell'Oro recently reported that open RAN revenue growth slowed to the 10 to 20% range in the first quarter, after more than doubling in 2022.


Low-Code and AI: Friends or Foes?

Although some predict that AI will replace low-code, there are actually many opportunities for symbiosis between the two concepts. Rather than eradicate low-code platforms entirely, LLMs will likely become more embedded within them. We’ve already seen this occur as low-code providers like Mendix and OutSystems integrated ChatGPT connectors. Microsoft has also embedded ChatGPT into its Power Platform as well as integrated GPT-driven Copilots into various developer environments. “Low-code and AI on their own are powerful tools to increase enterprise efficiency and productivity,” said Dinesh Varadharajan, the chief product officer at Kissflow. “But there is potential for the combination of both to unlock game-changing automation for almost every industry. The power comes from the congruence between low-code/no-code and AI.” There is also the opportunity to train bespoke LLMs on the inner workings of specific software development platforms, which could generate fully-built templates upon natural language prompts. 

Cloud cost optimization should begin by measuring the drivers of cloud spend at a granular level and then providing full visibility to the teams and organizations that are behind the spend, says Tim Potter, principal, technology strategy and cloud engineering with Deloitte Consulting. “Near-real-time dashboards showing cloud resource utilization, routine reports of cloud consumption, and predictive spend reports will provide application teams and business units with the data needed to take action to optimize cloud costs,” he notes. ... Rearchitecting applications is a frequently overlooked way to achieve the cost and other benefits of transitioning to a cloud model. “Organizations also need to understand the various discount models and select one that optimizes costs yet also provides flexibility and predictability into spending,” says Mindy Cancila, vice president of corporate strategy for Dell Technologies. Cancila adds that organizations should not only consider current workload costs, but also how to manage costs for workloads as they scale over time.


Warning: Attackers Abusing Legitimate Internet Services

Cloud storage platforms, and Google Cloud in particular, are the most exploited, followed by messaging services - most often Telegram, including via its API - as well as email services and social media, the researchers found. Examples of other services being abused by attackers include OneDrive, Discord, Gmail SMTP, Mastodon profiles, GitHub, bitcoin blockchain data, the project management tool Notion, malware analysis site VirusTotal, YouTube comments and even Rotten Tomatoes movie review site profiles. "It is important to note that ransomware campaigns use legitimate cloud storage tools such as mega.io or MegaSync for exfiltration purposes as well," although the crypto-locking malware itself may not be coded to work directly with legitimate tools, the report says. Criminals' choice of service depends on desired functionality. Anyone using an info stealer such as Vidar needs a place to store large amounts of exfiltrated data. The researchers said cloud services' easy setup for less technically sophisticated users makes them a natural fit for such use cases.



Quote for the day:

"We're all passionate about something, the secret is to figure out what it is, then pursue it with all our hearts" -- Gordon Tredgold

Daily Tech Digest - December 10, 2022

Risk and resilience priorities, as told by chief risk officers

CROs acknowledge that they need to spend more time considering “over the horizon risks.” This gap in thinking was brought into sharp focus by the heavy impact the COVID-19 pandemic and geopolitical tensions had on their institutions’ risk profiles, including second- and third-order effects such as supply chain risk, inflation, and rising interest rates, which were not anticipated by most banking executives. Institutions were ill-prepared to address these highly consequential risks. The failure goes well beyond risk functions, however. Many organizations used forecasting to develop market strategies, but this approach failed to pick up major reality shifts in the recent past—from the financial crisis of the 2000s to the pandemic to geopolitical realignments. Leading institutions are moving to scenario-based foresight to increase institutional resilience against over-the-horizon risks. The risk function can play an important role here in ensuring that the scenarios capture existing and expected risks, while aligning function priorities against scenarios.


Should central banks use DLT for CBDC?

When it comes to the topic of whether “to DLT or not to DLT” in the world of CBDC, Mikhalev took a slightly different position. He stated central banks have taken a top-down approach for hundreds of years, and while this works in many jurisdictions, it doesn’t work as well in emerging markets. “To have blockchain or not for CBDCs is increasingly being answered in the negative across established economies. ... This could reduce volatility in these emerging markets. These aspects which are specifically inherent to decentralisation and the distribution of power, should have positive effects in emerging economies.” However, Mikhalev continued, in each conversation carried out with central bankers in developed countries, he has found that they perceive blockchain as having little effect in situations where the supervisory institutions are not ready or unwilling to alter their business models around a new technology. “Blockchain doesn't really make much difference if nothing changes in terms of the existing established top-down structure of CBDCs. However, in emerging economies, this seems to differ,” Mikhalev noted.


A compliance fight in Germany could hurt Microsoft customers

German compliance authorities “can live with the situation where Microsoft pretends to do everything right and the authorities pretend to have done everything in their power to force Microsoft to become compliant,” Hence said in an interview with Computerworld. Microsoft “does not fulfill the most basic requirements of GDPR. They lack basic transparency. We can’t assess what they are doing because they are not telling us.” This is where politics comes into play, where practical forces can influence government compliance actions. German regulators “are afraid of retribution. (With regulators thinking) we won’t get more budget if we say that you can’t use Office any more. Or even Google Analytics, any more,” Hence said. “These are political issues. Nobody wants to be the bad guy.” Thus, Microsoft is likely to skate on the issue — at least for now. But what about enterprise IT execs? Are companies using Microsoft products immune from compliance punishments? Not necessarily. It might not seem fair to let Microsoft get away with this but to fine and otherwise punish its customers, but Hence argues that’s quite likely. And not just in Germany.


Policy Developments around Blockchain

On September 15, 2022, Singapore’s Monetary Authority (MAS) launched the Financial Services Industry Transformation Map 2025 that provides a framework of strategies to develop the country as a leading global financial center through enhanced payment connectivity to build a responsible digital asset ecosystem. It also laid out clear strategies to explore DLT in use cases such as cross-border payments, trade finance, and capital markets, besides supporting tokenization of financial assets. The policy supports a central bank digital currency (CBDC) and public-private collaboration to develop the infrastructure required to deliver such a currency. However, the first off the blocks in 2022 was the Securities and Futures Commission of Hong Kong which issued a joint circular with the HK Monetary Authority on intermediaries that can undertake virtual asset-related activities. Per the statement, intermediaries distributing virtual assets need to comply with the SFC’s requirements for sale of the products.


Share of Emerging Technologies in IT Budgets

National Association of Software and Services Companies (NASSCOM) and Boston Consultancy Group (BCG) today released a report titled "Sandboxing into the Future: Decoding Technology's Biggest Bets" on the sidelines of NASTech 2022 in Bengaluru. The report aims to uncover and develop perspectives on big-bet technologies that can potentially disrupt markets in the next 3-5 years. Enterprise Tech spending is estimated to reach $4.2 Tn by 2026 globally, amongst which Tech Services companies represent the largest segment and are expected to become $1.7 Tn by 2026 with a CAGR of 8.1%. As part of the study, 28 emerging technology themes from 11 tech families were identified – across markets and verticals – with the potential to disrupt markets, based on current tech spending, growth potential, innovation maturity, and funding momentum. Of these, 12 emerging technologies with high funding momentum and R&D focus have emerged as the “Biggest Bets”: Autonomous Analytics, AR & VR, Autonomous Driving, Computer Vision, Deep Learning, Distributed Ledger, Edge Computing, Sensor Tech, Smart Robots, Space Tech, Sustainability Tech, and 5G/6G.


Amazon Wants to Kill the Barcode

The system, called multi-modal identification, isn't going to fully replace barcodes soon. It's currently in use in facilities in Barcelona, Spain, and Hamburg, Germany, according to Amazon. Still, the company says it's already speeding up the time it takes to process packages there. The technology will be shared across Amazon's businesses, so it's possible you could one day see a version of it at a Whole Foods or another Amazon-owned chain with in-person stores. The problem that the system eliminates -- incorrect items coming down the line to be sent to customers -- doesn't happen too often, Amazon says. But even infrequent mistakes add up to significant slowdowns when considering just how many items a single warehouse processes in one day. Amazon's AI experts had to start by building up a library of images of products, something the company hadn't had a reason to create prior to this project. The images themselves as well as data about the products' dimensions fed the earliest versions of the algorithm, and the cameras continually capture new images of items to train the model with. The algorithm's accuracy rate was between 75% and 80% when first used, which Amazon considered a promising start. 


Cyber crime threatens manufacturing production

Targeted attacks are the most common, with smaller companies often the most vulnerable, yet many offering no cyber security training to staff. Sixty-two percent of manufacturers now have a formal cyber security procedure in place in the event of an incident, up 11% on last year’s figures with the same number giving a senior manager responsibility for cyber security. More than half (58%) have escalated this responsibility to board level. Stephen Phipson, CEO of Make UK, the manufacturers’ organisation said: “Digitisation is revolutionising modern manufacturing and becoming increasingly important to drive competitiveness and innovation. “While cost remains the main barrier to companies installing cyber protection, the need to increase the use of the latest technology makes mounting a defence against cyber threats essential. No business can afford to ignore this issue and while the increased awareness across the sector is encouraging, there is still much to be done. “Every business is vulnerable, and every business needs to take the necessary steps to protect themselves properly.”


How Do Agility and Software Architecture Fit Together?

But the question is, when we create software, we make decisions all the time. So what is architecture? How is architecture different from all of these normal decisions that we take? When I think about that, I always use a very loose definition: architecture is about the important things. I think Martin Fowler said something like that, and I really like it, because it's about those things that have a high risk or a high cost of change if we need to re-evaluate or redo them. I think these types of decisions qualify as architectural decisions. And then the question is, when do we decide on these important things, whatever important means? In my opinion, there is something that is quite underappreciated in most Scrum teams, and from my experience it is that the product owner has a very important part when it comes to software architecture, because it always starts with: what is the vision of the product? 


What’s a distributed compliance ledger and how is one integrated into Matter?

Matter’s DCL is a network of independent servers operated by the CSA and its partners. Each DCL server includes a complete copy of the database. The original data is managed and controlled by the CSA. The DCL is implemented by connecting all the servers using a cryptographically secured protocol. The DCL makes it difficult to manipulate the data in the database and increases the security of Matter devices and networks. ... The manufacturer writes the data to the database to add a new product to the DCL. It’s not ‘active’ until approved by the CSA. Once the device has passed certification and the CSA has received the confirmation from the PPA, the CSA adds “certified” to the status list letting all members of the Matter ecosystem know that this is an approved device and ready to be added to Matter networks. Database access is restricted. Device makers can only add data for their own products that are linked to their vendor identification (VendorID) number. Software updates must also be linked to the VendorID, or they will be rejected. Official CSA PPA bodies or the CSA can confirm or revoke device compliance data.
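The access rules described — vendors can only write records bearing their own VendorID, while certification status is set by the CSA — amount to a guarded record store. A toy sketch of those write rules; the field names and caller model are invented for illustration and do not reflect the real DCL schema:

```python
class ComplianceLedger:
    """Toy model of the DCL's write rules: vendor-scoped adds, CSA-only status."""

    def __init__(self):
        self.records = {}   # product_id -> {"vendor_id": ..., "status": ...}

    def add_product(self, caller_vendor_id: int, product_id: str, vendor_id: int) -> bool:
        if caller_vendor_id != vendor_id:
            return False                    # vendors may only add their own products
        self.records[product_id] = {"vendor_id": vendor_id, "status": "pending"}
        return True

    def set_certified(self, caller: str, product_id: str) -> bool:
        if caller != "CSA":
            return False                    # only the CSA flips certification status
        self.records[product_id]["status"] = "certified"
        return True

ledger = ComplianceLedger()
print(ledger.add_product(caller_vendor_id=0x1234, product_id="bulb-1", vendor_id=0x1234))  # True
print(ledger.add_product(caller_vendor_id=0x1234, product_id="lock-9", vendor_id=0x9999))  # False: wrong VendorID
print(ledger.set_certified("CSA", "bulb-1"))   # True: record becomes 'certified'
```

In the real DCL these checks are enforced cryptographically across the replicated servers rather than by a single class, but the authorization shape — who may write which field of which record — is the same.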


4 tips for implementing consistent configuration and automation standards

The team regularly reviews standards with squads and SMEs to keep the operating system and middleware standards current and compliant with security and other requirements. We created a naming structure for standards to maintain version control for compliance and audit purposes. The standards form a baseline for maintaining playbooks for consistent automation across the environment. The organization also promotes InnerSource (the use of open source practices to improve internal software) and advocates reusing playbooks. We base the playbooks on common configuration and automation standards. This establishes governance for operating systems and middleware support. ... Automation is a continuous journey. Achieving touchless deployments requires standard configurations, processes, procedures, security guidelines, and other dependencies that you must review and validate periodically. These standards form the baseline that the automation team will adopt and implement. 



Quote for the day:

"We are drowning in information, but starved for knowledge." -- John Naisbitt

Daily Tech Digest - April 17, 2022

What is the 9-box talent review? A matrix for identifying top performers

The first step in using a 9-box grid is to assess an employee’s performance, which is typically done by evaluating performance reviews or using talent management systems. Managers are tasked with ranking employees based on performance and behavior, and then those rankings are passed on to upper management and leaders who can then identify and rank employees for their leadership potential. Employees can rank as low, medium, or high performance depending on how well they meet the requirements of their role. Low-performing employees are those who do not complete job requirements and regularly fail to meet assigned KPIs or other benchmarks. Employees who fall into the medium category are those who meet expectations part of the time and complete job requirements half of the time. High-performing employees reach all their necessary benchmarks and job duties, often surpassing them. Despite the fact that the 9-box grid puts an emphasis on the highest and lowest performers, it’s not designed to pit workers against one another or to make them feel as if they’re being ranked.
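Mechanically, the grid is just a lookup over two three-level axes, performance and potential. A minimal sketch; the cell labels below are common conventions rather than a single standard, and organizations name the boxes differently:

```python
LEVELS = ("low", "medium", "high")

# One common labeling of the 9 cells (performance, potential); names vary by organization.
GRID = {
    ("high", "high"): "star",
    ("high", "medium"): "high performer",
    ("high", "low"): "solid professional",
    ("medium", "high"): "growth employee",
    ("medium", "medium"): "core player",
    ("medium", "low"): "effective specialist",
    ("low", "high"): "potential gem",
    ("low", "medium"): "inconsistent player",
    ("low", "low"): "underperformer",
}

def nine_box(performance: str, potential: str) -> str:
    """Map a (performance, potential) rating pair to its 9-box cell label."""
    if performance not in LEVELS or potential not in LEVELS:
        raise ValueError("ratings must be low, medium, or high")
    return GRID[(performance, potential)]

print(nine_box("high", "high"))   # star
print(nine_box("low", "high"))    # potential gem: low output today, high upside
```

Keeping the mapping this explicit is part of why the tool isn't meant to pit workers against each other: each employee lands in a development category independently, not in a forced ranking against peers.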


Approach cloud architecture from the outside in

Outside in moves in the opposite direction. You begin with the specific business requirements, such as what the business use cases are for specific solutions or, more likely, many solutions or applications. Then you move inward to infrastructure and other technologies specifically chosen to support the many solutions or applications required, such as databases, storage, compute, and other enabling technologies. Most cloud architects move from the inside out. They pick their infrastructure before truly understanding the solution’s specific purpose. They partner with a cloud provider or database vendor and pick other infrastructure-related solutions that they assume will meet their specific business solutions requirements. In other words, they pick a solution in the wide before they pick a solution in the narrow. This is how enterprises get solutions that function but are grossly underoptimized or, more often, have many surprise issues such as the ones discussed earlier. Discovering these issues requires a great deal of work and typically requires the team to remove and replace technology solutions on the fly.


The future of the internet: Inside the race for Web3’s infrastructure

The fastest way to provide reliable infrastructure to power DApp ecosystems is for centralized companies to set up a fleet of blockchain nodes, commonly housed in Amazon Web Services (AWS) data centers, and allow developers to access it from anywhere for a subscription. That is exactly what a few players in the space did, but it came at the price of centralization. This is a major issue for the Web3 economy, as it leaves the ecosystem vulnerable to attacks and at the mercy of a few powerful players. ... Decentralization is a key tenet of the Web3 economy, and centralized blockchain infrastructure threatens to undermine it. For instance, Solana has suffered multiple outages due to a lack of sufficient, decentralized nodes that could handle spiking traffic. This is a common problem for blockchain protocols that are trying to scale. ... Even more importantly, decentralized infrastructure competition results in greater decentralization of the Web3 economy. This is a good thing, as it makes the economy more resilient against attacks and censorship.


Enterprise architecture is based on business strategy, is it not?

Interestingly, many attempts to develop actionable plans from business strategy are thwarted, first of all, by the symbolic and elusive nature of strategy itself. For example, a rather common industry situation with business strategy can be vividly illustrated by the following jocular quote from Jeanne Ross, a former principal research scientist at MIT Sloan Center for Information Systems Research (CISR): ‘I remember IBM saying, “Our strategy is, we’re gonna raise share price to $11 per share”, and I thought, “Who the heck is gonna enable that strategy?”’. In fact, decades of research on information systems planning have long identified a broad spectrum of problems associated with business strategy as a basis for acting. Strategy can be vague, ambiguous and interpreted differently by different people (e.g. ‘become number one’ or ‘provide best services’). Strategy can be purely aspirational and consist of mere motivational slogans. Strategy can comprise various objectives and indicators offering no actionable hints, especially for IT. Strategy can be market sensitive, deliberately obscure and surrounded by secrecy.


6 Best Data Governance Practices

People, procedures, and technology are all critical aspects of data management. Keep all three elements in mind when developing and executing your data plan. However, you don’t have to improve all three areas simultaneously. Start with the essential components and work your way up to the complete picture. Begin with people, progress to the procedure, and conclude with technology. Each element must build on the preceding ones for the whole data governance plan to be well-rounded. The process won’t work without the correct individuals. If the people and procedures in your company aren’t managing your data as you intended, no cutting-edge technology can suddenly repair it. Before developing a process, search for and hire the proper people. ... It is critical to track progress and display the effectiveness of your data governance strategy, just as it would be with any other shift. Once you’ve acquired executive buy-in for your business case, you’ll need evidence to support each stage of your transition. Prepare ahead of time to establish metrics before implementing data policies so that you can build a baseline based on your current data management strategies.
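Establishing a baseline before new policies land can be as simple as measuring field completeness on today's data. The sketch below is a hypothetical illustration; the record shape and field names are invented for the example.

```python
# A hypothetical sketch: compute simple completeness metrics over existing
# records so a baseline exists before new data policies are rolled out.
# The customer records and field names are illustrative assumptions.

def completeness_by_field(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records with a non-empty value for each field."""
    total = len(records)
    if total == 0:
        return {f: 0.0 for f in fields}
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

customers = [
    {"id": 1, "email": "a@example.com", "phone": ""},
    {"id": 2, "email": "", "phone": "555-0100"},
    {"id": 3, "email": "c@example.com", "phone": "555-0101"},
]
baseline = completeness_by_field(customers, ["email", "phone"])
# Both fields are populated in 2 of 3 records here.
```

Re-running the same measurement after each policy change turns "is governance working?" into a before/after comparison against this baseline.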


Data quality can make or break efforts to bring artificial intelligence to IT operations

The success of AIOps is inexorably tied to "data, data, data, and how well you can handle and process the data," Krishnamsetty agrees. One of the most vexing issues is data access and acquisition, he points out. "You want to pull data from your AWS environment, or your application performance monitoring tools, or your log analytics tool. But all this data is in different formats." RDA addresses the data challenges associated with AIOps, Krishnamsetty continues. "If you don't have the proper data, it's garbage-in, garbage-out. However powerful your machine learning algorithms are, if your data quality is poor, you are not going to get good insights and analytics." For example, "if you look at any raw alerts coming from any of your management or monitoring systems, you will know how sparse the data is," he illustrates. "A human can't make a quick decision on it unless it is automatically enriched. The data is incomplete. What application, what infrastructure, and so forth." RDA also helps address the skills gap, he continues: expertise in assuring the quality of data fed into AI systems is in short supply.
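The normalize-then-enrich pattern Krishnamsetty describes can be sketched in a few lines. This is not CloudFabrix RDA itself; the source names, field mappings, and inventory lookup below are all illustrative assumptions.

```python
# A hypothetical sketch (not RDA itself): map raw alerts from different
# monitoring sources onto one schema, then enrich them with application
# context from an inventory so a human can act on them quickly.

INVENTORY = {  # illustrative asset inventory
    "web-01": {"application": "storefront", "environment": "prod"},
    "db-02": {"application": "orders", "environment": "prod"},
}

def normalize_alert(raw: dict, source: str) -> dict:
    """Translate source-specific field names into a common alert schema."""
    if source == "apm":
        return {"host": raw["hostname"], "severity": raw["level"], "message": raw["msg"]}
    if source == "logs":
        return {"host": raw["node"], "severity": raw["sev"], "message": raw["text"]}
    raise ValueError(f"unknown source: {source}")

def enrich(alert: dict) -> dict:
    """Attach application/infrastructure context to a sparse alert."""
    context = INVENTORY.get(alert["host"], {"application": "unknown", "environment": "unknown"})
    return {**alert, **context}

alert = enrich(
    normalize_alert({"hostname": "web-01", "level": "critical", "msg": "high latency"}, "apm")
)
```

After enrichment, the alert carries the "what application, what infrastructure" context the raw event lacked, which is exactly what makes a quick decision possible.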


How Crypto Lending Platforms will revolutionize the Fintech Industry moving in 2022

The transformative role that crypto-based lending platforms can have cannot be overstated. They give each person the power to become their own bank. Not only can they borrow from others at rates and conditions more favorable than traditional financial institutions offer, but they can also borrow against their own assets. For example, one could deposit their crypto assets and take out a loan against their cryptocurrency. So, when it appreciates, they have an increased asset position, plus the ability to meet urgent needs for liquidity. Credit forms the backbone of any healthy economy, and access to that credit determines its success in the global markets. Credit helps businesses and individuals grow in the backdrop of a growing economy. It provides businesses much-needed capital to expand, maintain inventory, spend on research and development, and sustainably pay wages. Without easy access to credit, businesses are often placed under a glass ceiling that hampers their ability to grow. Thanks to the internet, the world has gone global much faster than air travel ever connected us.
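The mechanics of borrowing against crypto collateral come down to a loan-to-value (LTV) ratio. The sketch below illustrates the idea; the 50% LTV cap and 75% liquidation threshold are invented for the example, not any specific platform's terms.

```python
# A minimal sketch of collateralized crypto lending: the loan is capped
# by a loan-to-value (LTV) ratio, and a falling collateral price can push
# the position past a liquidation threshold. The 0.5 and 0.75 ratios are
# illustrative assumptions, not real platform terms.

def max_loan(collateral_units: float, price: float, ltv: float = 0.5) -> float:
    """Largest loan the deposited collateral supports at the given LTV."""
    return collateral_units * price * ltv

def is_liquidatable(loan: float, collateral_units: float, price: float,
                    liquidation_ltv: float = 0.75) -> bool:
    """True when the loan exceeds the liquidation share of collateral value."""
    return loan > collateral_units * price * liquidation_ltv

loan = max_loan(2.0, 30_000.0)                   # borrow against 2 coins at $30k
at_risk = is_liquidatable(loan, 2.0, 18_000.0)   # collateral price drops to $18k
# 30,000 > 2 * 18,000 * 0.75 = 27,000, so the position is liquidatable.
```

The same arithmetic explains the upside the article mentions: if the collateral appreciates instead, the borrower keeps the gain while the loan stays fixed.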


Do You Need a Semantic Layer?

Most organizations don’t trust their data, leading to slow decisions or no decisions at all. In fact, according to the recent Chief Data Officer Survey, 72% of data and analytics leaders are heavily involved in or leading digital business initiatives, but they are uncertain how they can build a trusted data foundation to accelerate them. It’s not hard to see why a lack of trust in analytics outputs is so pervasive. Conflicting analytics outputs are all but assured when multiple business units, groups, business users, and data scientists prepare their analytics using their own business definitions and their own tools. A semantic layer can drive trust in data by empowering data self-service while ensuring the consistency, fidelity, and explainability of analytic outputs. With the fast pace of today’s business climate, waiting for a centralized data team to produce analytics for the business is a thing of the past. The self-service analytics revolution was born in response to the need for businesses to free themselves from the constraints of IT. 
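The core idea behind a semantic layer can be shown in miniature: the business definition of a metric lives in exactly one place, and every consumer resolves it through that layer. The metric name, rules, and data below are illustrative assumptions, not taken from any particular product.

```python
# A minimal sketch of the semantic-layer idea: one canonical metric
# definition that every dashboard or notebook reuses, instead of each
# team re-implementing "net revenue" with its own rules. All names and
# data here are illustrative.

ORDERS = [
    {"amount": 100.0, "status": "completed"},
    {"amount": 40.0, "status": "refunded"},
    {"amount": 60.0, "status": "completed"},
]

METRICS = {
    # The business definition lives in exactly one place.
    "net_revenue": lambda rows: sum(
        r["amount"] for r in rows if r["status"] == "completed"
    ),
}

def query(metric: str, rows: list[dict]) -> float:
    """Every consumer resolves metrics through this shared layer."""
    return METRICS[metric](rows)
```

Because all consumers call `query("net_revenue", ...)` rather than re-deriving the number, conflicting analytics outputs of the kind the article describes cannot arise from divergent definitions.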


Do CBDCs Need Blockchain? Growing Number of Central Banks Say No

It’s still too early to say that blockchain provides any definitive benefits, Dinesh Shah, the Bank of Canada’s director of FinTech research, told crypto industry news outlet The Block last week. Blockchain “is not a given but it’s still on our list of potentials,” when it comes to designing a CBDC, said Shah, who has expressed skepticism about the technology crypto is built on in the past. That is roughly where MIT’s researchers came down in a February test of technologies performed with the Federal Reserve Bank of Boston, which found that in a head-to-head test of a barebones CBDC design, a blockchain-based platform was far inferior. The blockchain-based platform was capable of only 10% of the scalability of a non-DLT system because of bottlenecks created by the need for a single and complete record of transactions in the order in which they were processed. Shah said that’s especially noteworthy because the Bank of Canada is collaborating with the Boston Fed and the Bank of England — also an MIT partner — on this research.


Test Case vs. Test Scenario: Key Differences to Note for Software Developers

It’s worth noting that test cases often form part of a test scenario. A test scenario is focused on an aspect of the project — for instance, "test the login function." Test cases are your means of checking if that aspect works as intended — in this case, that would be detailing the steps to take. ... Because test scenarios usually have one simple goal, the means of getting to that goal is more flexible than in test cases (where the process is more specific). The test documents will reflect these differences. A test case document will have specific guidelines for every case: the test case name, pre-conditions, post-conditions, description, input data, test steps, expected output, actual output, results, and status fields will all be laid out in the case document. ... In contrast, a test scenario document is open to interpretation by the team. They should identify the most important goal of the project and then design tests around reaching that goal. Test scenarios allow for creativity on the part of the testers.
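The fields listed for a test case document map naturally onto a structured record. The sketch below mirrors that field list; the class itself and the `record_result` helper are illustrative, not part of any particular test-management tool.

```python
# A sketch of the test case fields described above as a structured record.
# The field set mirrors the article's list; the class and its helper
# method are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    description: str
    pre_conditions: list
    test_steps: list
    input_data: dict
    expected_output: str
    post_conditions: list = field(default_factory=list)
    actual_output: str = ""
    status: str = "not run"

    def record_result(self, actual: str) -> None:
        """Record what actually happened and set pass/fail status."""
        self.actual_output = actual
        self.status = "pass" if actual == self.expected_output else "fail"

# Usage: one concrete case under the "test the login function" scenario.
case = TestCase(
    name="login with valid credentials",
    description="Valid user reaches the dashboard",
    pre_conditions=["user account exists"],
    test_steps=["open login page", "enter credentials", "submit"],
    input_data={"user": "alice", "password": "correct-horse"},
    expected_output="dashboard shown",
)
case.record_result("dashboard shown")
```

Everything about the case is pinned down before execution; by contrast, a scenario document would only state the goal ("test the login function") and leave the concrete cases to the team.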



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell