
Daily Tech Digest - November 12, 2024

Researchers Focus In On ‘Lightcone Bound’ To Develop An Efficiency Benchmark For Quantum Computers

The researchers formulated this bound by first reinterpreting the quantum circuit mapping challenge through quantum information theory. They focused on the SWAP “uncomplexity,” the lowest number of SWAP operations needed, which they determined using graph theory and information geometry. By representing qubit interactions as density matrices, they applied concepts from network science to simplify circuit interactions. To establish the bound, in an interesting twist, the team employed a Penrose diagram — a tool from theoretical physics typically used to depict spacetime geometries — to visualize the paths required for minimal SWAP-gate application. They then compared their model against a brute-force method and IBM’s Qiskit compiler, with consistent results affirming that their bound offers a practical minimum SWAP requirement for near-term quantum circuits. The researchers acknowledge the lightcone model has some limitations that could be the focus of future work. For example, it assumes ideal conditions, such as a noiseless processor and indefinite parallelization, conditions not yet achievable with current quantum technology. The model also does not account for single-qubit gate interactions, focusing only on two-qubit operations, which limits its direct applicability for certain quantum circuits.


Evaluating your organization’s application risk management journey

One way CISOs can articulate application risk in financial terms is by linking security improvement efforts to measurable outcomes, like cost savings and reduced risk exposure. This means quantifying the potential financial fallout from security incidents and showing how preventative measures mitigate these costs. CISOs need to equip their teams with tools that will help them protect their business in the short and long term. A study we commissioned with Forrester found that putting application security measures in place could save the average organization millions in avoided breach costs. ... To keep application risk management a dynamic, continuous process, CISOs should integrate security into every stage of software development. Instead of relying on periodic assessments, organisations should implement real-time risk analysis, continuous monitoring, and feedback mechanisms that enable teams to address vulnerabilities promptly as they arise, rather than waiting for scheduled evaluations. Automation can also play a key role in streamlining this process, enabling quicker remediation of identified risks. Building on this, creating a security-first mindset across the organisation – through training and clear communication – ensures risk management adapts to new threats, supporting both innovation and compliance.
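
To put this into numbers, a common starting point is annualized loss expectancy (ALE): the expected cost of one incident multiplied by the expected number of incidents per year. The sketch below is illustrative only; the loss figures, incident rates, and mitigation effect are hypothetical placeholders, not figures from the Forrester study.

```python
# Illustrative sketch: expressing application risk in financial terms using
# annualized loss expectancy (ALE). All figures are hypothetical.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = expected cost of one incident x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical baseline: a breach costs ~$4.5M and occurs ~0.3 times per year.
baseline_ale = annualized_loss_expectancy(4_500_000, 0.30)

# Hypothetical effect of preventative controls: incident rate drops to 0.1/year.
mitigated_ale = annualized_loss_expectancy(4_500_000, 0.10)

print(f"Baseline ALE:  ${baseline_ale:,.0f}")
print(f"Mitigated ALE: ${mitigated_ale:,.0f}")
print(f"Avoided cost:  ${baseline_ale - mitigated_ale:,.0f} per year")
```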


How a Second Trump Presidency Could Shape the Data Center Industry

“We anticipate that the incoming administration will have a keen focus on AI and our nation’s ability to be the global leader in the space,” Andy Cvengros, managing director, co-lead of US data center markets for JLL, told Data Center Knowledge. He said to do that, the industry will need to solve the transmission delivery crisis and continue to increase generation capacity rapidly. This may include reactivating decommissioned coal and nuclear power plants, as well as commissioning more of them. “We also anticipate that state and federal governments will become much more active in enabling the utilities to proactively expand substations, procure long lead items and support key submarket expansion through planned developments,” Cvengros said. ... Despite the federal government’s likely hands-off approach, Harvey said he believes large corporations might support consistent, global standards – especially since European regulations are far stricter. “US companies would prefer a unified regulatory framework to avoid navigating a complex patchwork of rules across different regions,” he said. Still, Europe’s stronger regulatory stance on renewable power might lead some companies to prioritize US-based expansions, where subsidies and fewer regulations make operations more economically feasible.


Data Breaches are a Dime a Dozen: It’s Time for a New Cybersecurity Paradigm

The modern-day ‘stack’ includes many disparate technology layers—from physical and virtual servers to containers, Kubernetes clusters, DevOps dashboards, IoT, mobile platforms, cloud provider accounts, and, more recently, large language models for GenAI. This has created the perfect storm for threat actors, who are targeting the access and identity silos that significantly broaden the attack surface. The sheer volume of weekly breaches reported in the press underscores the importance of protecting the whole stack with Zero Trust principles. Too often, we see bad actors exploiting some long-lived, stale privilege that allows them to persist on a network and pivot to the part of a company’s infrastructure that houses the most sensitive data. ... Zero Trust access for modern infrastructure benefits from being coupled with a unified access mechanism that acts as a front-end to all the disparate infrastructure access protocols – a single control point for authentication and authorization. This provides visibility, auditing, enforcement of policies, and compliance with regulations, all in one place. These solutions already exist on the market, deployed by security-minded organizations. However, adoption is still in its early days.
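
As a rough illustration of what such a single control point can look like, the sketch below models a front-end that checks a short-lived, least-privilege grant instead of a standing credential, evaluates a policy, and writes an audit record. The resources, roles, and policy table are hypothetical, not any particular product's API.

```python
# Hedged sketch of a unified access front-end: one place to authenticate,
# authorize, and audit access to disparate infrastructure. All names are
# hypothetical; a real deployment would front SSH, Kubernetes, databases, etc.
import time

POLICY = {  # hypothetical role -> resources it may reach
    "sre": {"k8s-prod", "grafana"},
    "dba": {"postgres-prod"},
}

GRANTS = {  # hypothetical short-lived grants: user -> (role, expiry epoch)
    "alice": ("sre", time.time() + 3600),  # avoids long-lived, stale privilege
}

AUDIT_LOG = []

def authorize(user: str, resource: str) -> bool:
    """Verify the grant is fresh, evaluate policy, and record the decision."""
    role, expiry = GRANTS.get(user, (None, 0.0))
    allowed = (role is not None and time.time() < expiry
               and resource in POLICY.get(role, set()))
    AUDIT_LOG.append({"user": user, "resource": resource, "allowed": allowed})
    return allowed

print(authorize("alice", "k8s-prod"))       # True while the grant is fresh
print(authorize("alice", "postgres-prod"))  # False: outside her role
```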


AI’s math problem: FrontierMath benchmark shows how far technology still has to go

Mathematics, especially at the research level, is a unique domain for testing AI. Unlike natural language or image recognition, math requires precise, logical thinking, often over many steps. Each step in a proof or solution builds on the one before it, meaning that a single error can render the entire solution incorrect. “Mathematics offers a uniquely suitable sandbox for evaluating complex reasoning,” Epoch AI posted on X.com. “It requires creativity and extended chains of precise logic—often involving intricate proofs—that must be meticulously planned and executed, yet allows for objective verification of results.” This makes math an ideal testbed for AI’s reasoning capabilities. It’s not enough for the system to generate an answer—it has to understand the structure of the problem and navigate through multiple layers of logic to arrive at the correct solution. And unlike other domains, where evaluation can be subjective or noisy, math provides a clean, verifiable standard: either the problem is solved or it isn’t. But even with access to tools like Python, which allows AI models to write and run code to test hypotheses and verify intermediate results, the top models are still falling short.
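
The role of tool use is easy to picture: a model can run a quick numerical check on an intermediate claim, even though passing such checks is not a proof. The snippet below is a deliberately simple stand-in for that kind of verification, not an example from the FrontierMath benchmark itself.

```python
# Sketch of the kind of intermediate check a model might run with a Python
# tool: numerically testing a claim before attempting a full argument. The
# claim (the sum of the first n odd numbers equals n^2) is deliberately simple.

def sum_of_first_n_odds(n: int) -> int:
    return sum(2 * k + 1 for k in range(n))

# Passing every case raises confidence but proves nothing in general -- which
# is why math's objective, verifiable answers make it a clean benchmark.
assert all(sum_of_first_n_odds(n) == n * n for n in range(1, 10_000))
print("Claim holds for n = 1 .. 9999")
```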


Can Wasm replace containers?

One area where Wasm shines is edge computing. Here, Wasm’s lightweight, sandboxed nature makes it especially intriguing. “We need software isolation on the edge, but containers consume too many resources,” says Michael J. Yuan, founder of Second State and the Cloud Native Computing Foundation’s WasmEdge project. “Wasm can be used to isolate and manage software where containers are ‘too heavy.’” Whereas containers take up megabytes or gigabytes, Wasm modules take mere kilobytes or megabytes. Compared to containers, a .wasm file is smaller and agnostic to the runtime, notes Bailey Hayes, CTO of Cosmonic. “Wasm’s portability allows workloads to run across heterogeneous environments, such as cloud, edge, or even resource-constrained devices.” ... Wasm has a clear role in performance-critical workloads, including serverless functions and certain AI applications. “There are definitive applications where Wasm will be the first choice or be chosen over containers,” says Luke Wagner, distinguished engineer at Fastly, who notes that Wasm brings cost-savings and cold-start improvements to serverless-style workloads. “Wasm will be attractive for enterprises that don’t want to be locked into the current set of proprietary serverless offerings.”
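
The size and portability argument is easy to demonstrate: a few lines of WebAssembly text compile to a module of a few hundred bytes that any compliant runtime can execute. The sketch below assumes the wasmtime Python bindings (pip install wasmtime); the module and exported function are illustrative.

```python
# Sketch: running a tiny Wasm module from a host program, assuming the
# wasmtime Python bindings are installed (pip install wasmtime).
from wasmtime import Store, Module, Instance

store = Store()

# A complete module in WebAssembly text format -- kilobytes rather than
# megabytes, and runnable on any compliant runtime (cloud, edge, devices).
module = Module(store.engine, """
  (module
    (func (export "add") (param i32 i32) (result i32)
      local.get 0
      local.get 1
      i32.add))
""")

instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # 5
```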


Authentication Actions Boost Security and Customer Experience

Authentication actions can be used as effective tools for addressing the complex access scenarios organizations must manage and secure. They can be added to workflows to implement convenience and security measures after users have successfully proven their identity during the login process. ... When using authentication actions, first take some time to fully map out the customer journey you want to achieve, and most importantly, all of the possible variations of this journey. Think of your authentication requirements as a flowchart that you control. Start by mapping out your requirements for different users and how you want them to sign up and authenticate. Understand the trade-off between security and user experience. Consider using actions to enable a frictionless initial login with a simple authentication method. You can use step-up authentication as a technique that increases the level of assurance when the user needs to perform higher-privilege operations. You can also use actions to implement dynamic behavior per user. For instance, you can use an action that captures an identifier like an email to identify the user. Then you can use another action to look up the user’s preferred authentication method or methods to give each user a personalized experience.
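
As a rough sketch of that flowchart, the snippet below chains three hypothetical actions: one captures the email identifier, the next looks up the user's preferred method, and a step-up rule raises the assurance level for higher-privilege operations. The action names and user store are assumptions, not any particular vendor's API.

```python
# Hedged sketch of authentication actions as a chain of steps. The action
# names, user store, and factor names are hypothetical, not a vendor API.

USER_PREFERENCES = {  # hypothetical directory of per-user preferences
    "dana@example.com": "passkey",
    "lee@example.com": "sms-otp",
}

def capture_identifier(ctx: dict) -> dict:
    """Action 1: record which user is signing in."""
    ctx["email"] = ctx["login_hint"]
    return ctx

def choose_method(ctx: dict) -> dict:
    """Action 2: personalize the login with the user's preferred method."""
    ctx["method"] = USER_PREFERENCES.get(ctx["email"], "password")
    return ctx

def step_up_if_needed(ctx: dict) -> dict:
    """Action 3: require a stronger factor for higher-privilege operations."""
    factors = [ctx["method"]]
    if ctx.get("operation") == "wire-transfer":
        factors.append("totp")  # step-up: raise the level of assurance
    ctx["required_factors"] = factors
    return ctx

ctx = {"login_hint": "dana@example.com", "operation": "wire-transfer"}
for action in (capture_identifier, choose_method, step_up_if_needed):
    ctx = action(ctx)
print(ctx["required_factors"])  # ['passkey', 'totp']
```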


How Businesses use Modern Development Platforms to Streamline Automation

APIs are essential for streamlining data flows between different systems. They enable various software applications to communicate with each other, automating data exchange and reducing manual input. For instance, integrating an API between a customer relationship management (CRM) system and an email marketing platform can automatically sync contact information and campaign data. This not only saves time, but also minimizes errors that can occur with manual data entry. ... Workflow automation tools are designed to streamline business processes by automating repetitive steps and ensuring smooth transitions between tasks. These tools help businesses design and manage workflows, automate task assignments, and monitor progress. For example, tools like Asana and Monday.com allow teams to automate task notifications, approvals, and status updates. By automating these processes, businesses can improve collaboration and reduce the risk of missed deadlines or overlooked tasks. Workflow automation tools also provide valuable insights into process performance, enabling companies to identify bottlenecks and optimize their operations. This leads to more efficient workflows and better resource management.
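
A minimal sketch of the CRM-to-email-platform sync described above might look like the following. The endpoints, field names, and tokens are hypothetical placeholders, not a real CRM or marketing API.

```python
# Hedged sketch of an API-driven sync between a CRM and an email marketing
# platform. Endpoints, field names, and tokens are hypothetical placeholders.
import requests

CRM_URL = "https://crm.example.com/api/contacts"            # hypothetical
MARKETING_URL = "https://mail.example.com/api/subscribers"  # hypothetical

def sync_contacts(crm_token: str, marketing_token: str) -> int:
    """Pull contacts from the CRM and push them to the marketing platform,
    removing the manual re-keying that causes data-entry errors."""
    contacts = requests.get(
        CRM_URL, headers={"Authorization": f"Bearer {crm_token}"}, timeout=10
    ).json()

    synced = 0
    for contact in contacts:
        payload = {"email": contact["email"], "name": contact.get("name", "")}
        resp = requests.post(
            MARKETING_URL, json=payload,
            headers={"Authorization": f"Bearer {marketing_token}"}, timeout=10,
        )
        if resp.ok:
            synced += 1
    return synced
```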

“Micromanagement is one of the fastest ways to destroy IT culture,” says Jay Ferro, EVP and chief information, technology, and product officer at Clario. “When CIOs don’t trust their teams to make decisions or constantly hover over every detail, it stifles creativity and innovation. High-performing professionals crave autonomy; if they feel suffocated by micromanagement, they’ll either disengage or leave for an environment where they’re empowered to do their best work.” ... One of the most challenging issues facing transformational CIOs is the overwhelming demand to take on more initiatives, deliver to greater scope, or accept challenging deadlines. Overcommitting to what IT can reasonably accomplish is an issue, but what kills IT culture is when the CIO leaves program leaders defenseless when stakeholders are frustrated or when executive detractors roadblock progress. “It demoralizes IT when there is a lack of direction, no IT strategy, and the CIO says yes to everything the business asks for regardless of whether the IT team has the capacity,” says Martin Davis, managing partner at Dunelm Associates. “But it totally kills IT culture when the CIO doesn’t shield teams from angry or disappointed business senior management and stakeholders.”


Understanding Data Governance Maturity: An In-Depth Exploration

Maturity in data governance is typically assessed through various models that measure different aspects of data management, such as data quality and compliance, and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function. ... Many maturity models are complex and may require significant time and resources to implement. Organizations need to ensure they have the capacity to effectively handle the complexity involved in using these models. Additionally, some data governance maturity models do not address the relevant related data management functions, such as metadata management, data quality management, or data security, to a sufficient level of detail for some organizations. ... Implementing changes based on maturity model assessments can face resistance; organizational culture may not accept the views discovered in an assessment. Adopting and sustaining effective change management strategies and choosing a maturity model carefully can help overcome resistance and ensure successful implementation.
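
To make the assessment idea concrete, the sketch below scores a handful of governance dimensions on a 1-5 maturity scale and flags the weakest areas. The dimensions, scores, and number of priority areas are illustrative, not taken from any particular maturity model.

```python
# Illustrative sketch of a simple maturity assessment: score each dimension
# on a 1-5 scale and flag the weakest areas. Dimensions and scores are
# hypothetical and not drawn from any specific maturity model.

assessment = {
    "data quality":        3,
    "metadata management": 2,
    "data security":       4,
    "compliance":          3,
}

overall = sum(assessment.values()) / len(assessment)
gaps = sorted(assessment, key=assessment.get)[:2]  # two weakest dimensions

print(f"Overall maturity: {overall:.1f} / 5")
print(f"Priority improvement areas: {', '.join(gaps)}")
```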



Quote for the day:

"Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah

Daily Tech Digest - July 09, 2024

AI stack attack: Navigating the generative tech maze

Successful integration often depends on having a solid foundation of data and processing capabilities. “Do you have a real-time system? Do you have stream processing? Do you have batch processing capabilities?” asks Intuit’s Srivastava. These underlying systems form the backbone upon which advanced AI capabilities can be built. For many organizations, the challenge lies in connecting AI systems with diverse and often siloed data sources. Illumex has focused on this problem, developing solutions that can work with existing data infrastructures. “We can actually connect to the data where it is. We don’t need them to move that data,” explains Tokarev Sela. This approach allows enterprises to leverage their existing data assets without requiring extensive restructuring. Integration challenges extend beyond just data connectivity. ... Security integration is another crucial consideration. As AI systems often deal with sensitive data and make important decisions, they must be incorporated into existing security frameworks and comply with organizational policies and regulatory requirements.


How to Architect Software for a Greener Future

Firstly, it’s a time shift, moving to a greener time. You can use burstable or flexible instances to achieve this. It’s essentially a sophisticated scheduling problem, akin to looking at a forecast to determine when the grid will be greenest—or conversely, how to avoid peak dirty periods. There are various methods to facilitate this on the operational side. Naturally, this strategy should apply primarily to non-demanding workloads. ... Another carbon-aware action you can take is location shifting—moving your workload to a greener location. This approach isn’t always feasible but works well when network costs are low, and privacy considerations allow. ... Resiliency is another significant factor. Many green practices, like autoscaling, improve software resilience by adapting to demand variability. Carbon awareness actions also serve to future-proof your software for a post-energy transition world, where considerations like carbon caps and budgets may become commonplace. Establishing mechanisms now prepares your software for future regulatory and environmental challenges.
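
In its simplest form, the scheduling problem reduces to picking the greenest window in a carbon-intensity forecast. The sketch below assumes such a forecast is already available as (hour, gCO2/kWh) pairs; the numbers are invented.

```python
# Minimal sketch of carbon-aware time shifting: given a hypothetical hourly
# grid carbon-intensity forecast, pick the greenest window for a flexible,
# non-demanding batch workload. All forecast values are invented.

forecast = [  # (hour of day, grid carbon intensity in gCO2/kWh)
    (8, 320), (9, 260), (10, 210), (11, 180),
    (12, 170), (13, 190), (14, 240), (15, 300),
]

def greenest_window(forecast, duration_hours: int = 3):
    """Return the start hour of the window with the lowest average intensity."""
    best_start, best_avg = None, float("inf")
    for i in range(len(forecast) - duration_hours + 1):
        window = forecast[i:i + duration_hours]
        avg = sum(intensity for _, intensity in window) / duration_hours
        if avg < best_avg:
            best_start, best_avg = window[0][0], avg
    return best_start, best_avg

start, avg = greenest_window(forecast)
print(f"Schedule the batch job at {start:02d}:00 (avg {avg:.0f} gCO2/kWh)")
```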


Evaluating board maturity: essential steps for advanced governance

Most boards lack a firm grasp of fundamental governance principles. I'd go so far as to say that 8 or 9 out of 10 boards could be described this way. Your average board director is intelligent and respected within their communities. But they often don't receive meaningful governance training. Instead, they follow established board norms without questioning them, which can lead to significant governance failures. Consider Enron, Wells Fargo, Volkswagen AG, Theranos, and, recently, Boeing—all had boards filled with recognized experts. However, inadequate oversight caused or allowed them to make serious and damaging errors. This is most starkly illustrated by Barney Frank, co-author of the Dodd-Frank Act (passed following the 2008 financial crisis) and a board member of Silicon Valley Bank while it collapsed. Having brilliant board members doesn't guarantee effective governance. The point is that, for different reasons, consultants and experts can 'misread' where a board is at. Frankly, this is most often due to just being lazy. But sometimes it is due to just not being clear about what to look for.


Mastering Serverless Debugging

Feature flags allow you to enable or disable parts of your application without deploying new code. This can be invaluable for isolating issues in a live environment. By toggling specific features on or off, you can narrow down the problematic areas and observe the application’s behavior under different configurations. Implementing feature flags involves adding conditional checks in your code that control the execution of specific features based on the flag’s status. Monitoring the application with different flag settings helps identify the source of bugs and allows you to test fixes without affecting the entire user base. ... Logging is one of the most common and essential tools for debugging serverless applications. I wrote and spoke a lot about logging in the past. By logging all relevant data points, including inputs and outputs of your functions, you can trace the flow of execution and identify where things go wrong. However, excessive logging can increase costs, as serverless billing is often based on execution time and resources used. It’s important to strike a balance between sufficient logging and cost efficiency. 
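
A feature flag check in a serverless handler can be as simple as the sketch below. The flag store and handler shape are hypothetical; in practice flags usually come from a managed flag service or environment configuration rather than a hard-coded dict.

```python
# Hedged sketch of a feature flag guarding a code path in a serverless-style
# handler, plus minimal input/output logging. The flag store is hard-coded
# here; in practice it would be environment config or a managed flag service.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

FLAGS = {"new_pricing_engine": False}  # toggled without redeploying code

def legacy_pricing(items):  # stand-in implementations
    return sum(i["price"] for i in items)

def new_pricing(items):
    return round(sum(i["price"] for i in items) * 0.95, 2)

def handler(event: dict) -> dict:
    # Log inputs sparingly -- execution time and log volume both cost money.
    log.info("incoming event: %s", json.dumps(event))

    if FLAGS["new_pricing_engine"]:
        total = new_pricing(event["items"])     # suspect path, currently off
    else:
        total = legacy_pricing(event["items"])  # known-good path

    log.info("computed total: %s", total)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

print(handler({"items": [{"price": 10.0}, {"price": 5.5}]}))
```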


Implementing Data Fabric: 7 Key Steps

As businesses generate and collect vast amounts of data from diverse sources, including cloud services, mobile applications, and IoT devices, the challenge of managing, processing, and leveraging this data efficiently becomes increasingly critical. Data fabric emerges as a holistic approach to address these challenges by providing a unified architecture that integrates different data management processes across various environments. This innovative framework enables seamless data access, sharing, and analysis across the organization irrespective of where the data resides – be it on-premises or in multi-cloud environments. The significance of data fabric lies in its ability to break down silos and foster a collaborative environment where information is easily accessible and actionable insights can be derived. By implementing a robust data fabric strategy, businesses can enhance their operational efficiency, drive innovation, and create personalized customer experiences. Implementing a data fabric strategy involves a comprehensive approach that integrates various Data Management and processing disciplines across an organization.


Empowering Self-Service Users in the Digital Age

Ultimately, portals must strike the balance between freedom and control, which can be achieved by ensuring flexibility with role-based access control. Granting end users the freedom to deploy within a secure framework of predefined permissions creates an environment ripe for innovation while remaining robustly protected. This means users can explore, experiment and innovate without concerns about security boundaries or unnecessary hurdles. But of course, as with any project, organizations can’t afford to build something and consider that job done. Measuring success is ongoing. Metrics such as how often the portal is accessed, who uses what, and which service catalogs are used should be tracked, along with other relevant data, to help point to any areas that need improvement. It is also important to remember that it is collaborative work between the platform team and end users. And in technology, there is always room for improvement. For instance, recent advances in AI/ML could soon be leveraged to analyze previously inaccessible datasets and generate smarter and faster decision-making.
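
A rough sketch of "freedom within predefined permissions": a deployment is allowed only when the user's role grants that action on the requested environment, and every request feeds the usage metrics mentioned above. Roles, actions, and environments are hypothetical.

```python
# Hedged sketch of role-based access control in a self-service portal, with
# simple usage tracking. Roles, actions, and environments are hypothetical.

ROLE_PERMISSIONS = {
    "developer":       {("deploy", "dev"), ("deploy", "staging")},
    "release-manager": {("deploy", "dev"), ("deploy", "staging"), ("deploy", "prod")},
}

USAGE_METRICS = []  # who uses what, and how often -- for measuring success

def can_deploy(role: str, environment: str) -> bool:
    allowed = ("deploy", environment) in ROLE_PERMISSIONS.get(role, set())
    USAGE_METRICS.append({"role": role, "env": environment, "allowed": allowed})
    return allowed

print(can_deploy("developer", "staging"))  # True: innovate freely
print(can_deploy("developer", "prod"))     # False: outside the guardrails
```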


Desperate for power, AI hosts turn to nuclear industry

As opposed to adding new green energy to meet AI’s power demands, tech companies are seeking power from existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals, according to The Wall Street Journal and other sources. According to sources cited by the WSJ, the owners of about one-third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom. ... “The power companies are having a real problem meeting the demands now,” Gold said. “To build new plants, you’ve got to go through all kinds of hoops. That’s why there’s a power plant shortage now in the country. When we get a really hot day in this country, you see brownouts.” The available energy could go to the highest bidder. Ironically, though, the bill for that power will be borne by AI users, not its creators and providers. “Yeah, [AWS] is paying a billion dollars a year in electrical bills, but their customers are paying them $2 billion a year. That’s how commerce works,” Gold said.


Fake network traffic is on the rise — here’s how to counter it

“Attempting to homogenize the bot world and the potential threat it poses is a dangerous prospect. The fact is, it is not that simple, and cyber professionals must understand the issue in the context of their own goals...” ... “Cyber professionals need to understand the bot ecosystem and the resulting threats in order to protect their organizations from direct network exploitation, indirect threat to the product through algorithm manipulation, and a poor user experience, and the threat of users being targeted on their platform,” Cooke says. “As well as [understanding] direct security threats from malicious actors, cyber professionals need to understand the impact on day-to-day issues like advertising and network management from bot profiles as a whole,” she adds. “So cyber professionals must ensure that the problem is tackled holistically, protecting their networks, data and their users from this increasingly sophisticated threat. Measures to detect and prevent malicious bot activity must be built into new releases, and cyber professionals should act as educational evangelists for users to help them help themselves with a strong awareness of the trademarks of fake traffic and malicious profiles.” 


Researchers reveal flaws in AI agent benchmarking

Since calling the models underlying most AI agents repeatedly can increase accuracy, researchers can be tempted to build extremely expensive agents so they can claim top spot in accuracy. But the paper described three simple baseline agents developed by the authors that outperform many of the complex architectures at much lower cost. ... Two factors determine the total cost of running an agent: the one-time costs involved in optimizing the agent for a task, and the variable costs incurred each time it is run. ... Researchers and those who develop models have different benchmarking needs from downstream developers who are choosing an AI to use in their applications. Model developers and researchers don’t usually consider cost during their evaluations, while for downstream developers, cost is a key factor. “There are several hurdles to cost evaluation,” the paper noted. “Different providers can charge different amounts for the same model, the cost of an API call might change overnight, and cost might vary based on model developer decisions, such as whether bulk API calls are charged differently.”
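
The two-factor cost structure amounts to a simple identity: total cost = one-time optimization cost + per-run cost x number of runs. The sketch below compares agents on accuracy and cost jointly; all figures are invented for illustration.

```python
# Sketch of joint accuracy/cost accounting for AI agents, following the
# split into one-time optimization cost and variable per-run cost. All
# figures are invented for illustration.

agents = [
    # (name, accuracy, one-time optimization cost in $, cost per run in $)
    ("complex-ensemble", 0.62, 900.0, 0.80),
    ("simple-baseline",  0.58,  50.0, 0.05),
]

def total_cost(fixed: float, per_run: float, runs: int) -> float:
    return fixed + per_run * runs

runs = 10_000
for name, accuracy, fixed, per_run in agents:
    cost = total_cost(fixed, per_run, runs)
    print(f"{name:16s} accuracy={accuracy:.0%}  cost for {runs:,} runs = ${cost:,.0f}")
# A few extra points of accuracy can cost an order of magnitude more to obtain.
```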


10 ways to prevent shadow AI disaster

Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Workers are curious about AI tools, seeing them as a way to offload busy work and boost productivity. Others want to master their use, seeing that as a way to prevent being displaced by the technology. Others became comfortable with AI for personal tasks and now want the technology on the job. ... shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says. Shadow AI could introduce legal issues, too. ... “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. 



Quote for the day:

“In matters of principle, stand like a rock; in matters of taste, swim with the current.” -- Thomas Jefferson

Daily Tech Digest - April 09, 2022

Essentials of Enterprise Architecture Tool

EA tools allow organizations to map out their business process architecture, business capability architecture, application architecture, data architecture, integration architecture, and technology architecture. The common capabilities of an EA tool include: an EA repository, which supports business, information, technology, and solution viewpoints and their relationships, as well as business direction, vision, and strategy; EA modelling, which supports at minimum the business, information, solution, and technology viewpoints, including modelling of as-is and target states, impact analysis, and roadmaps; decision analysis capabilities such as gap analysis, traceability, impact analysis, scenario planning, and systems thinking; multiple views for different types of audiences and users, such as executives, architects/designers, business planners, and suppliers; customization and extension of the meta-model, diagrams, menus, matrices, and reports; and collaboration and sharing features, including simultaneous model editing, a shared remote repository, version management with model comparison and merge, easy publishing, and review capabilities.


Could Blockchain Be Sustainability’s Missing Link?

Environmental sustainability is only one use case for blockchain technology. Companies can use distributed ledgers for social sustainability and governance. For example, pharmaceutical companies can collect data on a blockchain that identifies and traces prescription drugs. This data collection can prevent consumers from falling prey to counterfeit, stolen, or harmful products. Banks can collateralize physical assets, such as land titles, on a blockchain to keep an unalterable record and protect consumers from fraud. In supply chain finance, organizations can use distributed ledger technology to match the downstream flow of goods with the upstream flow of payments and information. That can help level the playing field for smaller financial institutions. Sustainability must be seamless. ServiceNow recently partnered with Hedera to help organizations easily adopt digital ledger technology on the Now Platform. This partnership provides a seamless connection between trusted workflows across organizations.


Supply chain woes? Analytics may be the answer

Enterprises face multiple risks throughout their supply chains, Deloitte says, including shortened product life cycles and rapidly changing consumer preferences; increasing volatility and availability of resources; heightened regulatory enforcement and noncompliance penalties; and shifting economic landscapes with significant supplier consolidation. ... “Often people think of the supply chain as one thing and it is not,” Korba says. “We think of the supply chain as the sum of several parts of the whole business operation — from understanding customer demand to materials management and manufacturing or sourcing and purchasing, to logistics and transportation, to inventory management and automated replenishment orders at Optimas and at our customers’ locations.” A key to success is the ability for all the supply chain tools the company uses to work together seamlessly, to help keep customers appropriately stocked and better manage costs, demand, inventory, production, and suppliers. The information provided through analytics needs to address financial issues such as cashflow and pricing on the supply and demand sides.


Cloud 2.0: Serverless architecture and the next wave of enterprise offerings

Serverless architecture brings two benefits. First, it enables a pay-as-you-go model on the full stack of technology and on the most granular basis possible, thereby reducing the overall run cost. The pay-as-you-go model is activated by putting functions into production via the operator of the serverless ecosystem only when they are needed. Therefore, serverless architecture not only reduces costs below the economies of scale provided by cloud-based setups capable of operating infrastructure at large scale, but also reduces idle capacity. Second, serverless architecture provides ecosystem access for the underlying infrastructure as well as the entire functionality, thereby drastically reducing the cost to transform the company’s IT environment. Ecosystem access for functions is achieved through the provider’s FaaS and BaaS models instead of being redeveloped for every client. While ecosystem access in SaaS was only possible for the entire software package, with serverless architecture even small-scale functions can be reused, thereby offering more flexibility and reusability on a broad basis.


Meta wants to turn real life into a free-to-play

Companies adopting the free-to-play monetization techniques in their titles naturally have an incentive to max out the users’ shopping sprees. To this end, they can deploy a whole array of design decisions, from annoying pop-ups with links to in-game shops to more sophisticated tools. The latter use behavioral data and psychological tricks to goad the users into spending more. Some of the latest patents coming from leading industry names, such as Activision, put machine learning at the service of the company’s bottom line. Tweaking the matchmaking system to prompt new players to spend more? Check. Clustering players in groups to target them with tailored messaging, offerings, and prices? Check. These and other techniques live and breathe behavioral data. As such, they do raise red flags in terms of data exploitation, especially if you consider who tends to fall for them the hardest. Free-to-play games make a solid chunk of their revenues off a very small subset of their player base, the so-called “whales,” as high-paying players are known in the industry.


Managing Complex Dependencies with Distributed Architecture at eBay

The eBay engineering team recently outlined how they came up with a scalable release system. The release solution leverages distributed architecture to release more than 3,000 dependent libraries in about two hours. The team is using Jenkins to perform the release in combination with Groovy scripts. As we learnt from Randy Shoup (VP of engineering and chief architect at eBay) and Mark Weinberg (VP, core product engineering at eBay), eBay had systemic challenges with releasing major dependencies, leading to the equivalent of distributed monoliths. Late last year, eBay began migrating their legacy libraries to a Mavenized source code. The engineering team needed to consider the complicated dependency relationships between the libraries before the release. The prerequisite for releasing a library is that all of its dependencies must already have been released; given the large number of candidate libraries and their complicated interdependencies, release performance suffers considerably if the release sequence cannot be orchestrated well.
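
The orchestration constraint, releasing a library only after everything it depends on has been released, is a topological ordering problem; independent libraries at the same level can then be released in parallel. Below is a small sketch with a hypothetical dependency graph (this is not eBay's implementation).

```python
# Sketch of ordering library releases so every dependency is released first,
# grouping independent libraries into waves that can be released in parallel.
# The dependency graph is hypothetical, not eBay's actual graph.
from graphlib import TopologicalSorter  # Python 3.9+

# library -> set of libraries it depends on (must be released earlier)
dependencies = {
    "app-core":     {"commons-util", "logging-lib"},
    "logging-lib":  {"commons-util"},
    "payments-lib": {"app-core"},
    "commons-util": set(),
}

sorter = TopologicalSorter(dependencies)
sorter.prepare()
wave = 0
while sorter.is_active():
    ready = sorted(sorter.get_ready())  # everything these depend on is released
    wave += 1
    print(f"release wave {wave}: {ready}")  # releasable in parallel
    sorter.done(*ready)
```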


Mark Zuckerberg’s vision for the metaverse is off to an abysmal start

While Meta’s promotional vision for metaverse worlds is a series of distinct snapshots, other metaverse platforms, such as Decentraland, The Sandbox, and Cryptovoxels, feature some level of urban planning. Like in many real-world cities, they use a grid system with plots of land distributed on a horizontal plane. This allows for property to be easily parceled and sold. However, many of these plots have remained empty, demonstrating that they are primarily traded speculatively. In some instances, content—buildings and things to do, see, and buy within them—has been added to plots of land, in an effort to create value. Virtual property developer the Metaverse Group is leasing Decentraland parcels and offering in-house architectural services to tenants. Its parent company, Tokens.com, has virtual headquarters there too, a blocky sci-fi-style tower in an area called Crypto Valley. ... Real cities are now choosing to emulate themselves in the metaverse. South Korea’s Metaverse 120 Centre will provide both recreational and administrative public services. 


SARB notes benefits, risks in using distributed ledger technology

One of the primary risks stems from the lack of regulatory certainty as the existing legal and regulatory frameworks for financial markets were not designed for trading, clearing or settling on DLT, he added. Innovation should be done in a way that the financial system is taken forward to benefit society as a whole, including contributing to achieving objectives such as improving efficiency, lowering barriers to entry for financial activity and addressing any challenges restricting access to meaningful financial services. ... “PK2 has demonstrated that building a platform for a tokenised security would impact on the existing participants in the financial market ecosystem, as several functions currently being performed by separately licensed market infrastructures could be carried out on a single shared platform. ... Further, the report, produced in partnership with the Intergovernmental Fintech Working Group and financial industry participants, highlights several legal, regulatory and policy implications that need to be carefully considered in the application of DLT to financial markets.


Why There is No Digital Future Without Blockchain

In web3, new storage solutions allow people to store data for each other in a secure and decentralized way. This makes it much, much more difficult to obtain user data by hacking a server full of data. At the same time, data on the user side will be managed in a completely permission-based way. Users will be able to manage data access on the fly, giving and withdrawing permission to personal data when needed. In our vision, this will end up being the way the internet is going to work in the future, whether you apply for a loan or do an online personality test. ... The power of blockchain here lies in the power of digital sovereignty, in other words, the freedom to do whatever you want online without anybody telling you otherwise. Here again, the decentralized nature of blockchain is key, because it makes it virtually impossible for any third party to interfere with the process. ... The idea is that the decentralized nature of blockchain allows people to transact wealth freely, without the need for banks, governments, or anybody else. This once sounded like a futuristic libertarian utopia; now it’s becoming a reality.


How to Measure Agile Maturity

Delivering successful products is essential and goes hand in hand with knowing how good we are at creating the product: our performance. I suggest resisting the urge to measure our performance as a cost. There are many useful metrics available that monitor our performance, such as speed, quality, and predictability. A word of caution is needed when deciding which metrics are valuable and which are not. For example, velocity is not suitable for comparing team performance. Although it can be a valuable metric at a team level, intended for the team to monitor its own speed, velocity does not add up to give you a number for your organisational speed. Some suggestions for useful metrics: cycle time, release frequency, product index, innovation rate, etc. ... Measuring how well we perform in delivering value to the customer also serves as a metric for organisational change. How? If it takes multiple sprints and 16 hand-offs to ship an integrated product, we can monitor how we are doing in trying to deliver that integrated product without hand-offs in a single sprint. If the number of hand-offs of a team goes down, its ability to deliver Done goes up, which is a metric of organisational improvement.



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis