Daily Tech Digest - December 31, 2022

Credentials Are the Best Chance To Catch the Adversary

It used to be that attackers would batter the networks of their targets. Now, they may use LinkedIn and social media to identify your employees’ personal email accounts, hack them, and look for other credentials. External actors may also identify unhappy employees posting negative reviews on Glassdoor and offer to buy their credentials. Or these actors may just boldly call your employees out of the blue and offer to pay them for their login information and ongoing approval of multi-factor authentication (MFA) prompts. As a result, MFA is no longer a reliable tool in preventing attacks, as it can be easily gamed by malicious insiders. ... Not every attack uses stolen credentials to gain initial access to networks, but every attack eventually involves credentials. After gaining access to networks, bad actors see who has privileged access. ... Between nation-state actors, criminal gangs, computer-savvy teenagers and disgruntled insiders, the likelihood is that your network has already been penetrated. What you need now is to detect these attacks at speed to minimize their damage.


Artificial Intelligence Without The Right Data Is Just... Artificial

Successful AI “requires data diversity,” says IDC analyst Ritu Jyoti in a report from earlier in 2022. “Similarly, the full transformative impact of AI can be realized by using a wide range of data types. Adding layers of data can improve the accuracy of models and the eventual impact of applications. For example, a consumer's basic demographic data provides a rough sketch of that person. If you add more context such as marital status, education, employment, income, and preferences like music and food choices, a more complete picture starts to form. With additional insights from recent purchases, current location, and other life events, the portrait really comes to life.” To enable AI to scale and proliferate across the enterprise, “stakeholders must ensure a solid data foundation that enables the full cycle of data management, embrace advanced analytical methods to realize the untapped value of data,” says Shub Bhowmick, co-founder and CEO of Tredence. “In terms of data availability and access, businesses need a way to parse through huge tracts of data and surface what’s relevant for a particular application,” says Sachdev.


Web3, the Metaverse and Crypto: Trends to Expect in 2023 and Beyond

If something good can come from FTX, it is that more regulations are coming, especially for centralized crypto exchanges, along with stricter rules on investor protection in the crypto trading space. Even Congress is paying attention, having summoned SBF for a congressional hearing (he was arrested the day before the scheduled hearing). These regulations are overdue – I have advocated for regulating centralized crypto exchanges since 2017. However, it’s better late than never. Legislators and regulators worldwide have zeroed in on the crypto market in an attempt to lay out rules that will hopefully prevent future catastrophes such as FTX. But legislators and regulators must be cautious in their approach, making sure not to stifle Web3 innovation. If they understand the difference between cryptocurrency as an asset class that trades on a centralized trading platform, and innovation that utilizes Web3 technology, and stick to investor protection while creating a welcoming environment for the development of Web3 applications, then we can expect a favorable legislative environment for both investors and developers.


Microservices Integration Done Right Using Contract-Driven Development

When all the code is part of a monolith, the API specification for a service boundary may just be a method signature. Also, these method signatures can be enforced through mechanisms such as compile-time checks, thereby giving early feedback to developers. However, when a service boundary is lifted to an interface such as an HTTP REST API by splitting the components into microservices, this early feedback is lost. The API specification, which was earlier documented as an unambiguous method signature, now needs to be documented explicitly to convey the right way of invoking it. This can lead to a lot of confusion and communication gaps between teams if the API documentation is not machine-parsable. ... Adopting an API specification standard such as OpenAPI or AsyncAPI is critical to bring back the ability to communicate API signatures in an unambiguous and machine-readable manner. While this adds to developers’ workload to create and maintain these specs, the benefits outweigh the effort.
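
As a rough illustration of how a machine-readable spec restores that early feedback, here is a minimal sketch that validates a service response against a schema fragment of the kind an OpenAPI document would contain. The endpoint shape, field names, and schema are hypothetical, and it assumes the third-party jsonschema package; real contract-testing tools built around OpenAPI/AsyncAPI do much more than this.

```python
# Minimal sketch: validate a service response against a machine-readable
# schema fragment (as one might extract from an OpenAPI spec). The field
# names and schema are hypothetical.
from jsonschema import validate, ValidationError

ORDER_SCHEMA = {
    "type": "object",
    "required": ["orderId", "status"],
    "properties": {
        "orderId": {"type": "string"},
        "status": {"type": "string", "enum": ["PENDING", "SHIPPED"]},
    },
}

def check_contract(response_body: dict) -> bool:
    """Return True if the response honours the published contract."""
    try:
        validate(instance=response_body, schema=ORDER_SCHEMA)
        return True
    except ValidationError as err:
        print(f"Contract violation: {err.message}")
        return False

# A provider that silently renames 'orderId' to 'id' fails this check in CI,
# long before a consumer discovers the break in production.
print(check_contract({"orderId": "42", "status": "SHIPPED"}))  # True
print(check_contract({"id": "42", "status": "SHIPPED"}))       # False
```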


The Threat of Predictive Policing to Data Privacy and Personal Liberty

It's not just related to law enforcement targeting; it's also related to any legal decisions. Custody decisions, civil suit outcomes, insurance decisions, and even hiring decisions can all be influenced by the RELX-owned LexisNexis system, which gathers and aggregates data. Unfortunately, there's little recourse for someone who was unfairly treated due to a data-based risk assessment because people are rarely privy to the way these decisions are made. So, a corporate HR manager or Family Court judge could be operating off bad or incomplete data when making decisions that could effectively change lives. RELX and Thomson Reuters have disclaimers freeing them from liability for inaccurate data, which means your information could be mixed in with someone else's, causing serious repercussions in the wrong circumstances. In 2016, a man named David Alan Smith successfully sued LexisNexis Screening Solutions when the company provided his prospective employer with an inaccurate background check. 


10 digital twin trends for 2023

Over the last year, the world has been wowed by how easy it is to use ChatGPT to write text and Stable Diffusion to create images. ... Over the next year, we can expect more progress in connecting generative AI techniques with digital twin models for describing not only the shape of things but how they work. Yashar Behzadi, CEO and founder of Synthesis AI, a synthetic data tools provider, said, “This emerging capability will change the way games are built, visual effects are produced and immersive 3D environments are developed. For commercial usage, democratizing this technology will create opportunities for digital twins and simulations to train complex computer vision systems, such as those found in autonomous vehicles.” ... Hybrid digital twins make it easier for CIOs to understand the future of a given asset or system. They will enable companies to merge asset data collected by IoT sensors with physics data to optimize system design, predictive maintenance and industrial asset management. Banerjee foresees more and more industries adopting this approach with disruptive business results in the coming years.


Change Management is Essential for Successful Digital Transformation

Vasantraj notes, “Organizational culture is vital in fostering leadership and enabling enterprises to adapt. Successful teams are built on trust and the ability to put aside self-interest and work together. Teams must think of organizations as a single entity and keep a growth mindset.” This type of collaborative culture doesn’t emerge without a lot of effort. Amy Ericson, a Senior Vice President at PPG, suggests one way a great change management leader can make their efforts employee-centric is to lead with empathy. She makes three helpful recommendations, “First, ask how your people are. Really ask them. Then, listen. You may find that they’re struggling, and your interest in how they are doing and genuine concern will help them move forward productively. Second, acknowledge their situation and ask how you can help. Do they need access to new tools or resources? Do they need a different schedule? Third, thank them, and follow through. Praise their courage to be honest, and deliver on your promises to help them succeed.”[5] Beyond being an empathetic leader, the BCG team highly recommends getting employees involved from the beginning of the change process.

‘There’s a career in cybersecurity for everyone,’ Microsoft Security CVP says

When there’s an abundance of opportunities, there are many ways of getting into that opportunity. We do have an incredible talent shortage. Going back to a myth buster, 37% of the people that we surveyed said that they thought a college degree was necessary to be in security. It’s not true. You don’t need a college degree. Many security jobs don’t require a four-year college degree. You can qualify by getting a certificate, an associate degree from a community college. Hence, why we are working with community colleges. There’s also a lot of resources for free because it can be daunting. The cost itself can be daunting, but there’s a lot of resources. Microsoft has a massive content repository that we have made available. We have made certifications. These are available to anyone who wants to take them, and there are ways you can train yourself and get into cybersecurity. We have this abundance of opportunity, which creates new ways of getting in, and we need to educate people about all these facets about how they can get in.


How the Rise of Machine Identities Impacts Enterprise Security Strategies

First, security leaders must rethink their traditional identity and access management (IAM) strategies. Historically, IAM has focused on human identities authenticating to access systems, software and apps on a business network. However, with the rise of containers, APIs and other technology, a secure IAM approach must utilize cryptographic certificates, keys and other digital secrets that protect connected systems and support an organization’s underlying IT infrastructure. With the shift to the cloud, a Zero Trust framework has become the new security standard, where all users, machines, APIs and services must be authenticated and authorized before being able to access apps and data. In the cloud, there is no longer a traditional security perimeter around the data center, so the service identity is the new perimeter. When handling machine identities, fine-grained consent controls are essential in protecting privacy as data is moved between machines. The authorization system discerns the “who, what, where, when, and why” and confirms that the owner has consented to the sharing of that data and that the person requesting access isn’t a fraudster.
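
To make the “who, what, where, when, and why” check concrete, here is a minimal, hypothetical sketch of an authorization decision that grants a machine-to-machine data transfer only when the caller is entitled to the resource for the declared purpose and the data owner has consented to that purpose. Service names, resources, and policy tables are invented for illustration.

```python
# Sketch of an authorization decision combining a machine identity's
# entitlement with the data owner's recorded consent. All names and
# policy tables are hypothetical; the "where"/"when" checks are omitted
# for brevity.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    caller_id: str   # who: the calling service/machine identity
    resource: str    # what: the data set being requested
    purpose: str     # why: declared purpose of the access
    region: str      # where: deployment region of the caller

ENTITLEMENTS = {("billing-svc", "customer-profile"): {"invoicing"}}
CONSENTS = {("alice", "customer-profile"): {"invoicing"}}  # owner -> allowed purposes

def authorize(req: AccessRequest, data_owner: str) -> bool:
    allowed = ENTITLEMENTS.get((req.caller_id, req.resource), set())
    consented = CONSENTS.get((data_owner, req.resource), set())
    # Grant only when the purpose is both entitled and consented to.
    return req.purpose in allowed and req.purpose in consented

print(authorize(AccessRequest("billing-svc", "customer-profile", "invoicing", "eu-west-1"), "alice"))  # True
print(authorize(AccessRequest("billing-svc", "customer-profile", "marketing", "eu-west-1"), "alice"))  # False
```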


3 Predictions For Fintech Companies’ Evolution In 2023

If you spend even five minutes on LinkedIn, you know the debate between in-person, hybrid and distributed work is still a hot one. But what does the data tell us? Owl Labs’ State of Remote Work Report found the number of workers choosing to work remotely in 2022 increased 24%, those choosing hybrid went up 16% and interest in in-office work dropped by 24%. The data keeps rolling in with a McKinsey study that found that, when offered, almost everyone takes the opportunity to work flexibly. Companies looking to embrace this flexible work mindset should focus on improving and optimizing synchronous activities like all-hands meetings, lunch and learns, and coffee chats. Supporting asynchronous work is also important. Personally, I’m a champion of written and narrative documentation of projects, which allows people to review and process on their own time and at their own pace. In my experience, this makes meetings even more productive and impactful, so people can focus on the outcomes of time spent together. No one has a crystal ball for what the next year holds.



Quote for the day:

"Leadership matters more in times of uncertainty." -- Wayde Goodall

Daily Tech Digest - December 29, 2022

10 IT certifications paying the highest premiums today

The Certified in the Governance of Enterprise IT (CGEIT) certification is offered by the ISACA to validate your ability to handle “the governance of an entire organization” and can also help prepare you for moving to a C-suite role if you aren’t already in an executive leadership position. The exam covers general knowledge of governance of enterprise IT, IT resources, benefits realization, and risk optimization. To qualify for the exam, you’ll need at least five years of experience in an advisory or oversight role supporting the governance of IT in the enterprise. ... The AWS Certified Security certification is a specialty certification from Amazon that validates your expertise and ability with securing data and workloads in the AWS cloud. The exam is intended for those working in security roles with at least two years of hands-on experience securing AWS workloads. It’s recommended that candidates for the exam have at least five years of IT security experience designing and implementing security solutions. ... To earn the certification, you will need to pass the AWS Certified Security Specialty exam, which consists of multiple choice and multiple response questions.


When will cloud computing stop growing?

So, no matter where the market goes, and even if the hyperscalers begin to seem more like legacy technology, the dependencies will remain and growth will continue. The hyperscaler market could become more complex and fragmented, but public clouds are the engines that drive growth and innovation. Will it stop growing at some point? I think there are two concepts to consider: First, cloud computing as a concept. Second, the utility of the technology itself. Cloud computing is becoming so ubiquitous, it will likely just become computing. If we use mostly cloud-based consumption models, the term loses meaning and is just baked in. I actually called for this in a book I wrote back in 2009. Others have called for this as well, but it’s yet to happen. When it does, my guess is that the cloud computing concept will stop growing, but the technology will continue to provide value. The death of a buzzword. The utility, which is the most important part, carries on. Cloud computing, at the end of the day, is a much better way to consume technology services. The idea of always owning our own hardware and software, running our own data centers, was never a good one.


Modernise and Bolster Your Data Management Practice with Data Fabric

Data has emerged as an invaluable asset that can not only be used to power businesses but can also be put to the wrong use for individual benefit. With stringent regulatory norms around data handling and management in place, data security, governance and compliance need dedicated attention. Data fabric can significantly improve security by integrating data and applications from across physical and IT systems. It enables a unified and centralized route to create policies and rules. The ability to automatically link policies and rules based on metadata such as data classifications, business terms, user groups, roles, and more, including policies on data access controls, data privacy, data protection, and data quality, ensures optimized data governance, security, and compliance. Changing business dynamics require businesses to be ahead of the curve by virtue of aptly and actively using data. Data fabric is a data operational layer that weaves through huge volumes of data from multiple sources and processes it using machine learning, enabling businesses to discover patterns and insights in real-time.


It’s a Toolchain!

Even ‘one’ toolchain is really not the same chain of tools; it is the same CI/CD tool managing a pool of others. This has really interesting connotations for the idea of the “weakest link in the chain,” whether we’re talking security, compliance or testing, because the weakest link might depend on which tools are spawned this run. Take an easy example that doesn’t overlap with the biggest reason above: targeting containers for test and virtual machines (VMs) for deployment. Some organizations do this type of thing regularly due to licensing or space issues. That is two different deployment steps in ‘one’ toolchain. There are more instances like this than you would think. “This project uses make, that one uses cmake” is an example of the type of scenarios we’re talking about. These minor variations are handled by what gets called from CI. Finally, most of the real-life organizations I stay in touch with are both project-based and constantly evolving. That makes both of the above scenarios the norm, not the exception. While they would love to have one stack and one toolchain for all projects, no one realistically sees that happening anytime soon.


How DevOps is evolving into platform engineering

Platform engineering is the next big thing in the DevOps world. It has been around for a few years. Now the industry is shifting toward it, with more companies hiring platform engineers or cloud platform engineers. Platform engineering opens the door for self-service capabilities through more automated infrastructure operations. With DevOps, developers are supposed to follow the "you build it, you run it" approach. However, this rarely happens, partly because of the vast number of complex automation tools. Since more and more software development tools are available, platform engineering is emerging to streamline developers' lives by providing and standardizing reusable tools and capabilities as an abstraction to the complex infrastructure. Platform engineers focus on internal products for developers. Software developers are their customers, and platform engineers build and run a platform for developers. Platform engineering also treats internal platforms as a product with a heavy focus on user feedback. Platform teams and the internal development platform scale out the benefits of DevOps practices. 


Top 5 Cybersecurity Trends to Keep an Eye on in 2023

Cyber security must evolve to meet these new demands as the world continues shifting towards remote and hybrid working models. With increased reliance on technology and access to sensitive data, organizations need to ensure that their systems are secure and their employees are equipped to protect against cyber threats. Organizations should consider implementing security protocols such as Multi-Factor Authentication (MFA), which requires additional authentication steps to prove the user’s identity before granting access to systems or data. MFA can provide an additional layer of protection against malicious actors who may try to access accounts with stolen credentials. Businesses should also consider developing policies and procedures for securing employee devices. This could include providing employees with secure antivirus software and encrypted virtual private networks (VPNs) for remote connections. Additionally, employees should be trained on the importance of strong passwords, unique passwords for each account, and the dangers of using public networks.


Understanding Data Management, Protection, and Security Trends to Design Your 2023 Strategy

Today, more than ever, there is a need for a modernized approach to data security, considering that threats are getting increasingly sophisticated. Authentication-as-a-Service with built-in SSO capabilities, tightly integrated with cloud apps, will secure online access. Data encryption solutions with comprehensive key management will help customers protect their digital assets, whether on-premises or in the cloud. EDRM solutions with the widest file and app support will help customers protect and retain control over their data even outside their networks. DLP solutions with integrated user behavior analysis (UBA) modules help customers get more leverage from their DLP investment. Data discovery and classification help organizations get complete visibility into sensitive data with efficient data discovery, classification, and risk analysis across heterogeneous data stores. These are some of the approaches through which organizations can benefit from OEMs that design data security solutions and products.


US-China chip war puts global enterprises in the crosshairs

“In addition to the chipmakers and semiconductor manufacturers in China, every company on the supply chain of advanced chipsets, such as the electronic vehicle manufacturers and HPC [high performance computing] makers in China, will be hit," said Charlie Dai, research director at market research firm Forrester. "There will also be collateral damage to the global technology ecosystem in every area, such as the chip design, tooling, and raw materials.” Enterprises might not feel the burn right away, since interdependencies between China and the US will be hard to unwind immediately. For example, succumbing to pressure from US businesses, in early December the US Department of Defense said it would allow its contractors to use chips from the banned Chinese chipmakers until 2028. In addition, the restrictions are not likely to have a direct effect on the ability of the global chip makers to manufacture semiconductors, since they have not been investing in China to manufacture chips there, said Pareekh Jain, CEO at Pareekh Consulting.


Financial Services Was Among Most-Breached Sectors in 2022

The practice of attackers sneaking so-called digital skimmers - typically, JavaScript code - onto legitimate e-commerce or payment platforms also continues. These tactics, known as Magecart-style attacks, most often aim to steal payment card data when a customer goes to pay. Attackers either use that data themselves or batch it up into "fullz," referring to complete sets of credit card information that are sold via a number of different cybercrime forums. Innovation continues among groups that practice Magecart tactics. In recent weeks, reports application security vendor Jscrambler, three different attack groups have begun wielding new, similar tactics designed to inject malicious JavaScript into legitimate sites. One of the groups has been injecting a "Google Analytics look-alike script" into victims' pages, while another has been injecting a "malicious JavaScript initiator that is disguised as Google Tag Manager." The third group is also injecting code, but does so by having registered the domain name for Cockpit, a free web marketing and analytics service that ceased operations eight years ago. 
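
A very small defensive sketch of the detection side: flag any script source on a checkout page that is not on an explicit allowlist, which is one cheap way to spot look-alike analytics domains of the kind described above. The allowlist, the sample HTML, and the helper are all hypothetical; production controls would also rely on Content Security Policy and subresource integrity.

```python
# Flag third-party <script> sources that are not on an allowlist, as a
# simple defence-in-depth check against Magecart-style injections.
# The allowlist and the HTML below are illustrative only.
import re

ALLOWED_HOSTS = {"www.googletagmanager.com", "www.google-analytics.com", "cdn.example-shop.com"}

def suspicious_scripts(html: str) -> list[str]:
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, flags=re.I)
    flagged = []
    for src in srcs:
        host = re.sub(r"^https?://", "", src).split("/")[0]
        if host not in ALLOWED_HOSTS:
            flagged.append(src)
    return flagged

page = '<script src="https://www.google-analytics.com/analytics.js"></script>' \
       '<script src="https://g00gle-analytics.example/collect.js"></script>'
print(suspicious_scripts(page))  # ['https://g00gle-analytics.example/collect.js']
```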


Microservices Integration Done Right Using Contract-Driven Development

Testing an application is not just about testing the logic within each function, class, or component. Features and capabilities are a result of these individual snippets of logic interacting with their counterparts. If a service boundary/API between two pieces of software is not properly implemented, it leads to what is popularly known as an integration issue. Example: If functionA calls functionB with only one parameter while functionB expects two mandatory parameters, there is an integration/compatibility issue between the two functions. In a monolith, the compiler or runtime flags this mismatch at the call site, and such quick feedback helps us course-correct early and fix the problem immediately. However, when we look at such compatibility issues at the level of microservices where the service boundaries are at the HTTP, messaging, or event level, any deviation or violation of the service boundary is not immediately identified during unit and component/API testing. The microservices must be tested with all their real counterparts to verify if there are broken interactions. This is what is broadly (and in a way wrongly) classified as integration testing.
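
A minimal sketch of that functionA/functionB example, in Python rather than any particular service's code: in-process, the mismatch surfaces the moment the call is made, while the same mismatch hidden behind an HTTP boundary only shows up once the two services are actually integrated (or is caught earlier by a contract test).

```python
# The functionA/functionB example from the article, sketched in Python.
# In-process, the mismatch surfaces immediately (at compile time in a
# statically typed monolith); across an HTTP boundary it stays hidden
# until the two services are integrated.

def function_b(order_id: str, quantity: int) -> str:
    return f"ordered {quantity} of {order_id}"

def function_a() -> str:
    # Passing one argument where two are mandatory fails immediately:
    # TypeError: function_b() missing 1 required positional argument: 'quantity'
    return function_b("sku-123")

try:
    function_a()
except TypeError as err:
    print(f"Caught at the call site: {err}")

# If function_b instead lived behind http://orders-svc/order and function_a
# posted {"order_id": "sku-123"} to it, nothing in function_a's own unit
# tests would flag the missing field -- only a contract test or a full
# integration run would.
```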



Quote for the day:

"To command is to serve : nothing more and nothing less." -- Andre Marlaux

Daily Tech Digest - December 28, 2022

The 5-step plan for better Fraud and Risk management in the payments industry

The overall complexity and size of the digital payments industry make it extremely difficult to detect fraud. In this context, merchants and payment companies can introduce fraud monitoring and anti-fraud mechanisms that verify every transaction in real-time. AI-based systems can take into account different aspects of a suspicious transaction, for example, the amount, unique bank card token, the user’s digital fingerprint, the IP address of the payer, etc., to evaluate its authenticity. Today, OTPs are synonymous with two-factor authentication and are thought to augment existing passwords with an extra layer of security. Yet, fraudsters manage to circumvent them every day. With Out-of-Band Authentication solutions in combination with real-time Fraud Risk management solutions, the service provider can choose one of many multi-factor authentication options available during adaptive authentication, depending on their preference and risk profile. Just like 3D Secure, this is another internationally accepted compliance mechanism that ensures that all the intermediaries involved in the payments system take special care of the sensitive client information.
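
As a purely illustrative sketch of the kind of real-time check described above, here is a tiny rule-based risk score over those transaction attributes (amount, card token, device fingerprint, payer IP). The field names, weights, and thresholds are invented; a real system would combine scores like this with trained models and out-of-band step-up authentication.

```python
# Illustrative rule-based risk scoring over the transaction attributes
# mentioned in the article. Thresholds, weights, and field names are
# hypothetical.

def risk_score(txn: dict, known_devices: set[str], blocked_ips: set[str]) -> int:
    score = 0
    if txn["amount"] > 1000:
        score += 30                      # unusually large amount
    if txn["device_fingerprint"] not in known_devices:
        score += 25                      # first time we see this device
    if txn["payer_ip"] in blocked_ips:
        score += 50                      # IP previously linked to fraud
    if txn["card_token_first_use"]:
        score += 15                      # brand-new bank card token
    return score

txn = {"amount": 1500, "device_fingerprint": "fp-9a1", "payer_ip": "203.0.113.7",
       "card_token_first_use": True}
score = risk_score(txn, known_devices={"fp-3c2"}, blocked_ips=set())
action = "step-up auth" if score >= 50 else "approve"
print(score, action)  # 70 step-up auth
```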


The Importance of Pipeline Quality Gates and How to Implement Them

There is no doubt that CI/CD pipelines have become a vital part of the modern development ecosystem that allows teams to get fast feedback on the quality of the code before it gets deployed. At least that is the idea in principle. The sad truth is that too often companies fail to fully utilize the fantastic opportunity that a CI/CD pipeline offers in being able to provide rapid test feedback and good quality control, by failing to implement effective quality gates into their respective pipelines. A quality gate is an enforced measure built into your pipeline that the software needs to meet before it can proceed to the next step. This measure enforces certain rules and best practices that the code needs to adhere to in order to prevent poor quality from creeping into the code. It can also drive the adoption of test automation, as it requires testing to be executed in an automated manner across the pipeline. This has a knock-on effect of reducing the need for manual regression testing in the development cycle, driving rapid delivery across the project.
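
One way to picture a quality gate is as a small script the pipeline runs after the test and scan stages: it compares measured metrics against agreed thresholds and exits non-zero on failure, which fails the CI stage and blocks promotion. The metric names and thresholds below are examples only, and a real pipeline would read the numbers from coverage and scanner reports rather than hard-coding them.

```python
# Minimal sketch of a pipeline quality gate: exit non-zero when a metric
# misses its threshold, so the CI stage fails and promotion is blocked.
# Metric names and thresholds are examples.
import sys

GATES = {
    "line_coverage": 80.0,   # percent, minimum
    "critical_vulns": 0,     # count, maximum
}

def evaluate(metrics: dict) -> list[str]:
    failures = []
    if metrics["line_coverage"] < GATES["line_coverage"]:
        failures.append(f"coverage {metrics['line_coverage']}% < {GATES['line_coverage']}%")
    if metrics["critical_vulns"] > GATES["critical_vulns"]:
        failures.append(f"{metrics['critical_vulns']} critical vulnerabilities found")
    return failures

if __name__ == "__main__":
    # In a real pipeline these numbers would come from report files.
    failures = evaluate({"line_coverage": 74.2, "critical_vulns": 1})
    for f in failures:
        print(f"QUALITY GATE FAILED: {f}")
    sys.exit(1 if failures else 0)
```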


Best of 2022: Measuring Technical Debt

Of the different forms of technical debt, security and organizational debt are the ones most often overlooked and excluded in the definition. These are also the ones that often have the largest impact. It is important to recognize that security vulnerabilities that remain unmitigated are technical debt just as much as unfixed software defects. The question becomes more interesting when we look at emerging vulnerabilities or low-priority vulnerabilities. While most will agree that known, unaddressed vulnerabilities are a type of technical debt, it is questionable if a newly discovered vulnerability is also technical debt. The key here is whether the security risk needs to be addressed and, for that answer, we can look at an organization’s service level agreements (SLAs) for vulnerability management. If an organization sets an SLA that requires all high-level vulnerabilities be addressed within one day, then we can say that high vulnerabilities older than that day are debt. This is not to say that vulnerabilities that do not exceed the SLA do not need to be addressed; only that vulnerabilities within the SLA represent new work and only become debt when they have exceeded the SLA.
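
The SLA-based definition above translates almost directly into code. The sketch below treats a vulnerability as security debt only once its age exceeds the SLA for its severity; the SLA values and the record shape are illustrative, not taken from any particular organization.

```python
# Sketch of the SLA-based definition of security debt: a vulnerability is
# debt only once its age exceeds the SLA for its severity. SLA values and
# the data shape are illustrative.
from datetime import datetime, timedelta

SLA_DAYS = {"high": 1, "medium": 30, "low": 90}

def security_debt(vulns: list[dict], now: datetime) -> list[dict]:
    debt = []
    for v in vulns:
        age = now - v["discovered"]
        if age > timedelta(days=SLA_DAYS[v["severity"]]):
            debt.append(v)
    return debt

now = datetime(2022, 12, 28)
vulns = [
    {"id": "CVE-A", "severity": "high", "discovered": datetime(2022, 12, 1)},   # past SLA: debt
    {"id": "CVE-B", "severity": "high", "discovered": datetime(2022, 12, 27)},  # still within SLA
]
print([v["id"] for v in security_debt(vulns, now)])  # ['CVE-A']
```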


DevOps Trends for Developers in 2023

Security automation is the concept of automating security processes and tasks to ensure that your applications and systems remain secure and free from malicious threats. In the context of CI/CD, security automation ensures that your code is tested for vulnerabilities and other security issues before it gets deployed to production. In addition, by deploying security automation in your CI/CD pipeline, you can ensure that only code that has passed all security checks is released to the public/customers. This helps to reduce the risk of vulnerabilities and other security issues in your applications and systems. The goal of security automation in CI/CD is to create a secure pipeline that allows you to quickly and efficiently deploy code without compromising security. Since manual testing consumes a lot of time and developer effort, many organizations are integrating security automation in their CI/CD pipeline today. ... Also, the introduction of AI/ML in the software development lifecycle (SDLC) is getting attention as the models are trained to detect irregularities in the code and give suggestions to enhance or rewrite it.


What Brands Get Wrong About Customer Authentication

When weighing customer friction against practical account security needs, one of the main challenges is convincing the revenue side of a business of the need for best practice from a security standpoint. Cybersecurity teams must demonstrate that the financial risks of not putting security in place - i.e., fraud, account takeover, reputation loss, regulatory fines, lawsuits, etc. - outweigh the loss of revenue and abandonment of transactions on the other side. There are always costs associated with security systems, but comparing the costs associated with fraud to those of implementing new security measures will justify the purchase. There is a fine balance between having effective security and operating a business. Customers quickly become frustrated by jumping through hoops to log in, and the password route is unsustainable. It’s time to look at the relationship between security and authentication and develop solutions for both. Taking authentication to the next level requires thinking outside the box. If you want to implement an authentication strategy that doesn’t drive away customers, you need to make customer experience the focal point.


Video games and robots want to teach us a surprising lesson. We just have to listen

The speedy, colorful ghosts zooming their way around the maze greeted me as I stared at the screen of a Pac-Man machine, a part of the 'Never Alone: Video Games and Other Interactive Design' exhibit of the Museum of Modern Art in New York City. Using the tiniest amount of RAM and code, each ghost is programmed with its own specific behaviors, which combine to create the masterpiece work, according to Paul Galloway, collection specialist for the Architecture and Design Department. This was the first time I'd seen video games inside a museum, and I had come to this exhibit to see if I could glean some insight into technology through the lens of art. It's an exhibit that is more timely now than ever, as technology has been absorbed into nearly every facet of our lives both at work and at home -- and what I learnt is that our empathy with technology is leading to new kinds of relationships between ourselves and our robot friends. ... According to Galloway, the Never Alone exhibit is linked to an Iñupiaq video game included in the exhibit called Never Alone (Kisima Ingitchuna).


The increasing impact of ransomware on operational technology

To protect against initial intrusion of networks, organisations must consistently find and remediate key vulnerabilities and known exploits, while monitoring the network for attack attempts. Also, wherever possible equipment should be kept up-to-date. VPNs in particular need close attention from cyber security personnel; new VPN keys and certificates must be created, with logging of activity over VPNs being enabled. Access to OT environments via VPNs calls for architecture reviews, multi-factor authentication (MFA) and jump hosts. In addition, users should read emails in plain text only, as opposed to rendering HTML, and disable Microsoft Office macros. For network access attempts from threat actors, organisations should perform an architecture review for routing protocols involving OT, and monitor for the use of open source tools. MFA should be implemented to access OT systems, and intelligence sources utilised for threat and communication identification and tracking.


The security risks of Robotic Process Automation and what you can do about it

RPA credentials are often shared so they can be used repeatedly. Because these accounts and credentials are left unchanged and unsecured, a cyber attacker can steal them, use them to elevate privileges, and move laterally to gain access to critical systems, applications, and data. In addition, users with administrator privileges can retrieve credentials stored in locations that are not secured. As many enterprises leveraging RPA have numerous bots in production at any given time, the potential risk is very high. Securing the privileged credentials utilised by this emerging digital workforce is an essential step in securing RPA workflows. ... The explosion in identities is putting more pressure on security teams since it leads to the creation of more vulnerabilities. The management of machine identities, in particular, poses the biggest problem, given that they can be generated quickly without consideration for security protocols. Further, while credentials used by humans often come with organisational policy that mandates regular updates, those used by robots remain unchanged and unmanaged. 


Best of 2022: Using Event-Driven Architecture With Microservices

Most existing systems live on-premises, while microservices live in private and public clouds, so the ability for data to transit the often unstable and unpredictable world of wide area networks (WANs) is tricky and time-consuming. There are mismatches everywhere: updates to legacy systems are slow, but microservices need to be fast and agile. Legacy systems use old communication mediums, but microservices use modern open protocols and APIs. Legacy systems are nearly always on-premises and at best use virtualization, but microservices rely on clouds and IaaS abstraction. The case becomes clear – organizations need an event-driven architecture to bridge all these mismatches between legacy systems and microservices. ... Orchestration is a good description – composers create scores containing sheets of music that will be played by musicians with differing instruments. Each score and its musician are like a microservice. In a complex symphony with a hundred musicians playing a wide range of instruments – like any enterprise with complex applications – far more orchestration is required.
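
For readers who have not worked with the pattern, here is a deliberately tiny in-process sketch of the publish/subscribe idea at the heart of event-driven architecture: a legacy system emits an event once, and any number of microservice-style consumers react without the publisher knowing they exist. The topic, payload, and handlers are invented, and a real deployment would put an event broker (Kafka, Solace, RabbitMQ, etc.) in the middle and deliver events asynchronously.

```python
# Minimal in-process publish/subscribe sketch. A real system would use a
# broker and asynchronous delivery; the topic and payload are illustrative.
from collections import defaultdict
from typing import Callable

_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    for handler in _subscribers[topic]:
        handler(event)  # a broker would deliver this asynchronously

subscribe("order.created", lambda e: print(f"billing: invoice for {e['order_id']}"))
subscribe("order.created", lambda e: print(f"shipping: pick list for {e['order_id']}"))

# The legacy order system emits one event; both consumers react.
publish("order.created", {"order_id": "A-1001"})
```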


Scope 3 is coming: CIOs take note

Many companies in Europe have built teams to address IT sustainability and have appointed directors to lead the effort. Gülay Stelzmüllner, CIO of Allianz Technology, recently hired Rainer Karcher as head of IT sustainability. “My job is to automate the whole process as much as possible,” says Karcher, who was previously director of IT sustainability at Siemens. “This includes getting source data directly from suppliers and feeding that into data cubes and data meshes that go into the reporting system on the front end. Because it’s hard to get independent and science-based measurements from IT suppliers, we started working with external partners and startups who can make an estimate for us. So if I can’t get carbon emissions data directly from a cloud provider, I take my invoices containing consumption data, and then take the location of the data center and the kinds of equipment used. I put all that information to a REST API provided by a Berlin-based company, and using a transparent algorithm, they give me carbon emissions per service.” Internally speaking, the head of IT sustainability role has become more common in Europe—and some of the more forward-thinking US CIOs are starting to see the need in their own organizations.



Quote for the day:

"The only way to follow your path is to take the lead." -- Joe Peterson

Daily Tech Digest - December 27, 2022

Prepping for 2023: What’s Ahead for Frontend Developers

WebAssembly will work alongside JavaScript, not replace it, Gardner said. If you don’t know one of the languages that compile to WebAssembly — which acts as a compilation target — Rust might be a good one to learn because it’s new and Gardner said it’s gaining the most traction. Another route to explore: Blending JavaScript with WebAssembly. “Rust to WebAssembly is one of the most mature paths because there’s a lot of overlap between the communities, a lot of people are interested in both Rust and WebAssembly at the same time,” he said. “Plus, it’s possible to blend WebAssembly with JavaScript so it’s not an either-or situation necessarily.” That in turn will yield new high-performing applications running on the web and mobile, Gardner added. “You’re not going to see necessarily a ‘Made with WebAssembly’ banner show up on websites, or anything along those lines, but you are going to see some very high-performing applications running on the web and then also on mobile, built off of WebAssembly,” he said. ... “Organizations are trying to automate and improve their test automation, and part of that shift to shipping faster means, you have to find ways to optimize what you’re doing,” DeSanto said.


What is FinOps? Your guide to cloud cost management

“FinOps brings financial accountability — including financial control and predictability — to the variable spend model of cloud,” says J.R. Storment, executive director of the FinOps Foundation. “This is increasingly important as cloud spending makes up ever more of IT budgets.” It also enables organizations to make informed trade-offs between speed, cost, and quality in their cloud architecture and investment decisions, Storment says. “And organizations get maximum business value by helping engineering, finance, technology, and business teams collaborate on data-driven spending decisions,” he says. Aside from bringing together the key people who can help an organization gain better control of its cloud spending, FinOps can help reduce cloud waste, which IDC estimates at between 10% and 30% for organizations today. “Moving from show-back cloud accounting, where IT still pays and budgets for cloud spending, to a charge-back model, where individual departments are accountable for cloud spending in their budget, is key to accelerating savings and ensuring only necessary cloud projects are implemented,” Jensen says.


IoT Analytics: Making Sense of Big Data

The principles that guide enterprises in the way they approach IoT analytics data are: Data is an asset: Data is an asset that has a specific and measurable value for the enterprise; Data is shared: Data must be shared across the enterprise and its business units, and users have access to the data that is necessary to perform their activities; Data trustees: Each data element has trustees accountable for data quality; Common vocabulary and data definitions: Data definition is consistent, and the taxonomy is understandable throughout the enterprise; Data security: Data must be protected from unauthorised users and disclosure; Data privacy: Privacy and data protection are considered throughout the life cycle of a Big Data project, and all data sharing conforms to the relevant regulatory and business requirements; and Data integrity and the transparency of processes: Each party to a Big Data analytics project must be aware of and abide by their responsibilities regarding the provision of source data and the obligation to establish and maintain adequate controls over the use of personal or other sensitive data.


Reframing our understanding of remote work

The remote and hybrid work trend is the most disruptive change in how businesses work since the introduction of the personal computer and mobile devices. Then, like now, the conversation was lost in the weeds. Should we allow PCs? Should we allow employees to bring their own devices? Should we issue pagers, feature phones, then smartphones to employees or let them use their own? In hindsight, it's clear that all these concerns were utterly pointless. The PC revolution was a tsunami of certainty that would wash away old ways of doing everything. So the only question should have been: How do we ensure these devices are empowering, secure, and usable? All focus should have been on the massive learning curve by organizations (what's the best way to deploy, update, secure, provision, purchase, and network these devices for maximum benefit) and by end users. In other words, while everyone gnashed their teeth over whether to allow devices — or what kind or level of devices to allow — the energy could have been much better spent realizing the entire issue was about skills and knowledge.


Developing Successful Data Products at Regions Bank

Misra said that there are a few especially important components involved in the success of the data product partner role and the discipline of product management for analytics and AI initiatives. One is to ensure that the partner role is strategic, proactive, and focused on critical business needs, and not simply an on-demand service within the company. All data products should address a critical business priority for partners and, when deployed, should deliver substantial incremental value to the business. The teams that work on the products should employ agile methods and include data scientists, data managers, data visualization experts, user interface designers, and platform and infrastructure developers. Misra is a fan of software engineering disciplines — systematic techniques for the analysis, design, implementation, testing, and maintenance of software programs — and believes that they should be employed in data science and data products as well. This product orientation also requires that there’s a big-picture focus, not just by the data product partners but by everyone on the product development teams. 


Amplified security trends to watch out for in 2023

Cybercriminals target employees across different industries to surreptitiously recruit them as insiders, offering them financial enticements to hand over company credentials and access to systems where sensitive information is stored. This approach isn’t new, but it is gaining popularity. A decentralized work environment makes it easier for criminals to target employees through private social channels, as the employee does not feel that they are being watched as closely as they would in a busy office setting. Aside from monitoring user behavior and threat patterns, it’s important to be aware of and be sensitive about the conditions that could make employees vulnerable to this kind of outreach – for example, the announcement of a massive corporate restructuring or a round of layoffs. Not every employee affected by a restructuring suddenly becomes a bad guy, but security leaders should work with Human Resources or People Operations and people managers to make them aware of this type of criminal scheme, so that they can take the necessary steps to offer support to employees who could be affected by such organizational or personal matters. 


What is the Best Cloud Strategy for Cost Optimization?

More often than not, some resources are underutilized. This usually stems from overbudgeting for certain processes. For instance, a cloud computing instance may be underutilized to the point that it uses less than 5% of its CPU. Note that with cloud services, you pay for the storage and computing power, rather than the space. In the instance highlighted above, it’s clear that there’s a case of significant waste. In your bid to optimize costs, it’s best to identify these idle instances and consolidate the workload into fewer cloud instances. It can be difficult to understand how much power the system uses without adequate visualization. Heat maps are highly useful in cloud cost optimization. This infographic tool highlights computing demand and consumption’s high and low points. This data can be useful in establishing stop and start times for cost reduction. Visual tools like heat maps can help you identify clogged-up sections before they become problematic. When a system load becomes one-directional, you know it’s time to adjust and balance it before it disrupts your processes.
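
Two of the ideas above lend themselves to a short sketch: flagging instances whose average CPU sits below an idle threshold (consolidation candidates) and bucketing utilization samples into a day-of-week by hour grid from which a heat map can be drawn. The instance names, samples, and the 5% threshold are illustrative; real data would come from the cloud provider's monitoring APIs.

```python
# Sketch: flag idle instances and build heat map cells from utilization
# samples. Instance names and samples are made up.
from collections import defaultdict
from statistics import mean

IDLE_THRESHOLD = 5.0  # percent CPU

def idle_instances(samples: dict[str, list[float]]) -> list[str]:
    return [name for name, cpu in samples.items() if mean(cpu) < IDLE_THRESHOLD]

def heatmap(points: list[tuple[int, int, float]]) -> dict[tuple[int, int], float]:
    """points: (day_of_week, hour, cpu_percent) -> average per cell."""
    cells = defaultdict(list)
    for day, hour, cpu in points:
        cells[(day, hour)].append(cpu)
    return {cell: mean(vals) for cell, vals in cells.items()}

print(idle_instances({"web-1": [42.0, 57.5], "batch-7": [1.2, 3.9]}))  # ['batch-7']
print(heatmap([(0, 9, 80.0), (0, 9, 60.0), (6, 3, 2.0)]))  # {(0, 9): 70.0, (6, 3): 2.0}
```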


Server supply chain undergoes shift due to geopolitical risks

Adding to the motivation to exit China and Taiwan was the saber rattling and increasingly bellicose tone from Beijing to Taiwan, along with fairly severe sanctions on semiconductor sales from the U.S. Department of Commerce. This has led some US-based cloud service providers, such as Google, AWS, Meta, and Microsoft, to look at adding server production lines outside Taiwan as a precautionary measure, according to TrendForce. There have been a number of other moves as well. In the US, Intel is spending $20 billion on an Arizona fab and another $20 billion on fabs in Ohio. TSMC is spending $40 billion on fabs in Arizona as well, and Apple is moving production to the US, Mexico, India, and Vietnam. TrendForce also noted a phenomenon it calls “fragmentation” as an emerging model in the management of the server supply chain. It used to be that server production and the assembly process were handled entirely by ODMs. In the future, the assembly task of a server project will be given to not only an ODM partner but also a system integrator.


What’s the Difference Between Kubernetes and OpenShift?

Red Hat provides automated installation and upgrades for most common public and private clouds, allowing you to update on your own schedule and without disrupting operations. This process is perhaps one of the biggest differentiators between OpenShift and the standard Kubernetes environment, as it provides a runbook for updates and uses this to avoid disruption. If you’re running a cluster of OpenShift servers, you will be able to upgrade while applications continue to run, with OpenShift’s orchestration tools moving nodes and containers as required. When it comes to managed on-premises Kubernetes, OpenShift is perhaps best compared with Microsoft’s Azure Arc tooling, which brings Azure’s managed Kubernetes on-premises, using the Azure Portal as a management tool, or with VMware’s Tanzu. They are all based on certified Kubernetes, adding their own management tooling and access control. OpenShift is more a sign of Kubernetes’ importance to enterprise application development than anything else.


CISO Budget Constraints Drive Consolidation of Security Tools

Piyush Pandey, CEO at Pathlock, a provider of unified access orchestration, says budget constraints will affect not only solution purchases but also, potentially, the staff required to run them. “This will likely drive the consolidation of solutions that span across multiple organizations, such as access, compliance, and security tools,” he says. “This consolidation into platforms will help organizations prioritize their resources -- time, money, and people.” He says organizations that focus on comprehensive solutions can drive more synergies across different departments to be compliant. “This won't just be about cost savings, however -- it will also help reduce the complexity of their infrastructure, eliminating multiple standalone tools and solutions,” Pandey adds. Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, explains that the global financial downturn has hit multiple sectors, which means budgets are short overall. “The challenge will be keeping cybersecurity postures strong, even in the face of budget cuts,” he says.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - December 26, 2022

Nvidia still crushing the data center market

EVGA CEO Andy Han cited several grievances with Nvidia, not the least of which was that Nvidia competes with its own board partners. Nvidia makes graphics cards and sells them to consumers under the Founders Edition brand, something AMD and Intel do very little or not at all. In addition, Nvidia’s line of graphics cards was being sold for less than what licensees were selling their cards for. So not only was Nvidia competing with its licensees, but it was also undercutting them. Nvidia does the same on the enterprise side, selling DGX server units (rack-mounted servers packed with eight A100 GPUs) in competition with OEM partners like HPE and Supermicro. Das defends this practice. “DGX for us has always been sort of the AI innovation vehicle where we do a lot of item testing,” he says, adding that building the DGX servers gives Nvidia the chance to shake out the bugs in the system, knowledge it passes on to OEMs. “Our work with DGX gives the OEMs a big head-start in getting their systems ready and out there. So it's actually an enabler for them.” But both Snell and Sag think Nvidia should not be competing against its partners. “I'm highly skeptical of that strategy,” Snell says.


A Look Ahead: Cybersecurity Trends to Watch in 2023

Multifactor authentication was once considered the gold standard of identity management, providing a crucial backstop for passwords. All that changed this year with a series of highly successful attacks using MFA bypass and MFA fatigue tactics, combined with tried-and-true phishing and social engineering. That success won’t go unnoticed. Attackers will almost certainly increase multifactor authentication exploits. "Headline news attracts the next wave of also-rans and other bad actors that want to jump on the newest methods to exploit an attack," Bird says. "We're going to see a lot of situations where MFA strong authentication is exploited and bypassed, but it's just unfortunately a reminder to us all that tech is only a certain percentage of the solution." Ransomware attacks have proliferated across public and private sectors, and tactics to pressure victims into paying ransoms have expanded to double and even triple extortion. Because of the reluctance of many victims to report the crime, no one really knows whether things are getting better or worse. 


Why zero knowledge matters

In a sense, zero knowledge proofs are a natural elaboration on trends in complexity theory and cryptography. Much of modern cryptography (of the asymmetric kind) is dependent on complexity theory because asymmetric security relies on using functions that are feasible in one form but not in another. It follows that the great barrier to understanding ZKP is the math. Fortunately, it is possible to understand conceptually how zero knowledge proofs work without necessarily knowing what a quadratic residue is. For those of us who do care, a value z is a quadratic residue of y if there is some x for which x² ≡ z (mod y). This rather esoteric concept was used in one of the original zero knowledge papers. Much of cryptography is built on exploring the fringes of math (especially factorization and modulus) for useful properties. Encapsulating ZKP's complex mathematical computations in libraries that are easy to use will be key to widespread adoption. We can do a myriad of interesting things with such one-way functions. In particular, we can establish shared secrets on open networks, a capability that modern secure communications are built upon.
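
For readers who want to see the quadratic residue idea rather than just read the definition, here is a brute-force toy in Python. The modulus 21 is deliberately tiny; the protocols alluded to above rely on the fact that deciding quadratic residuosity is believed to be hard for large composite moduli whose factorization is kept secret.

```python
# Quadratic residues: z is a quadratic residue mod y if some x satisfies
# x^2 ≡ z (mod y). This brute-force check is for illustration only.

def is_quadratic_residue(z: int, y: int) -> bool:
    return any((x * x) % y == z % y for x in range(1, y))

y = 21  # toy modulus (3 * 7); real schemes use moduli hundreds of digits long
residues = sorted({(x * x) % y for x in range(1, y)})
print(residues)                      # [1, 4, 7, 9, 15, 16, 18]
print(is_quadratic_residue(4, 21))   # True  (2^2 = 4)
print(is_quadratic_residue(5, 21))   # False
```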


Rust Microservices in Server-side WebAssembly

Rust enables developers to write correct and memory-safe programs that are as fast and as small as C programs. It is ideally suited for infrastructure software, including server-side applications, that require high reliability and performance. However, for server-side applications, Rust also presents some challenges. Rust programs are compiled into native machine code, which is not portable and is unsafe in multi-tenancy cloud environments. We also lack tools to manage and orchestrate native applications in the cloud. Hence, server-side Rust applications commonly run inside VMs or Linux containers, which bring significant memory and CPU overhead. This diminishes Rust’s advantages in efficiency and makes it hard to deploy services in resource-constrained environments, such as edge data centers and edge clouds. The solution to this problem is WebAssembly (Wasm). WebAssembly started as a secure runtime inside web browsers, and Wasm programs can be securely isolated in their own sandboxes. With a new generation of Wasm runtimes, such as the Cloud Native Computing Foundation’s WasmEdge Runtime, you can now run Wasm applications on the server.


How to automate data migration testing

Testing with plenty of time before the official cutover deadline is usually the bulk of the hard work involved in data migration. The testing might be brief or extended, but it should be thoroughly conducted and confirmed before the process is moved forward into the “live” phase. An automated data migration approach is a key element here. You want this process to work seamlessly while also operating in the background with minimal human intervention. This is why I favor continuous or frequent replication to keep things in sync. One common strategy is to run automated data synchronizations in the background via a scheduler or cron job, which only syncs new data. Each time the process runs, the amount of information transferred will become less and less. ... Identify the automatic techniques and principles that will ensure the data migration runs on its own. These should be applied across the board, regardless of the data sources and/or criticality, for consistency and simplicity’s sake. Monitoring and alerts that notify your team of data migration progress are key elements to consider now. 
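
A minimal sketch of the watermark pattern described above: each scheduled run copies only rows changed since the last successful sync, so the delta shrinks as cutover approaches. The table shape and column names are hypothetical, and a real job would use the source and target systems' own drivers and transactions rather than in-memory dictionaries.

```python
# Incremental sync sketch: copy only rows modified after the last watermark,
# then advance the watermark. Table/column names are hypothetical.
from datetime import datetime

def incremental_sync(source_rows: list[dict], target: dict, last_sync: datetime) -> datetime:
    """Copy rows modified after last_sync into target; return the new watermark."""
    new_watermark = last_sync
    for row in source_rows:
        if row["updated_at"] > last_sync:
            target[row["id"]] = row                  # upsert into the target store
            new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

target: dict = {}
rows = [
    {"id": 1, "updated_at": datetime(2022, 12, 20), "name": "alpha"},
    {"id": 2, "updated_at": datetime(2022, 12, 26), "name": "beta"},
]
watermark = incremental_sync(rows, target, last_sync=datetime(2022, 12, 24))
print(len(target), watermark)  # 1 2022-12-26 00:00:00
```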


Clean Code: Writing maintainable, readable and testable code

Clean code makes it easier for developers to understand, modify, and maintain a software system. When code is clean, it is easier to find and fix bugs, and it is less likely to break when changes are made. One of the key principles of clean code is readability, which means that code should be easy to understand, even for someone who is not familiar with the system. To achieve this, developers should, for example, use meaningful names for variables, functions, and classes. Another important principle of clean code is simplicity, which means that code should be as simple as possible, without unnecessary complexity. To achieve this, developers should avoid using complex data structures or algorithms unless they are necessary, and should avoid adding unnecessary features or functionality. In addition to readability and simplicity, clean code should also be maintainable, which means that it should be easy to modify and update the code without breaking it. To achieve this, developers should write modular code that is organized into small, focused functions, and should avoid duplication of code. Finally, clean code should be well-documented.
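
A tiny before/after in the spirit of those principles, using an invented order-total example: the second version uses intention-revealing names and splits the work into small, focused functions.

```python
# Before: terse names, two jobs in one function.
def calc(d, t):
    s = sum(i["p"] * i["q"] for i in d)
    return s + s * t

# After: intention-revealing names, each function does one thing.
def subtotal(items: list[dict]) -> float:
    return sum(item["price"] * item["quantity"] for item in items)

def total_with_tax(items: list[dict], tax_rate: float) -> float:
    return subtotal(items) * (1 + tax_rate)

items = [{"price": 9.99, "quantity": 2}, {"price": 4.50, "quantity": 1}]
print(round(total_with_tax(items, tax_rate=0.2), 2))  # 29.38
```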


Artificial intelligence predictions 2023

Synthetic data – data artificially generated by a computer simulation – will grow exponentially in 2023, says Steve Harris, CEO of Mindtech. “Big companies that have already adopted synthetic data will continue to expand and invest as they know it is the future,” says Harris. Harris gives the example of car crash testing in the automotive industry. It would be unfeasible to keep rehearsing the same car crash again and again using crash test dummies. But with synthetic data, you can do just that. The virtual world is not limited in the same way, which has led to heavy adoption of synthetic data for AI road safety testing. Harris says synthetic data is now being used in industries he never expected in order to improve development, services and innovation. ... Banks will use AI more heavily to give them a competitive advantage to analyse the capital markets and spot opportunities. “2023 is going to be the year the rubber meets the road for AI in capital markets,” says Matthew Hodgson, founder and CEO of Mosaic Smart Data. “Amidst the backdrop of volatility and economic uncertainty across the globe, the most precious resource for a bank is its transaction records – and within this is its guide to where opportunity resides.


Group Coaching - Extending Growth Opportunity Beyond Individual Coaching

First, as a coach, since our focus is on the relationship and interactions between the individuals, we don’t coach individuals in separate sessions. Instead, we bring them together as the group/team that they are part of and coach the entire group. Anything said by one member of the team is heard by everyone right there and then. The second building block is holding the mirror to the intangible entity mentioned above. To be accurate, holding the mirror is not a new skill for proponents of individual coaching, but it takes a significantly different approach in group coaching and has a more pronounced impact here. Holding the mirror here means picking up the intangibles and making the implicit explicit, for example, sensing the mood in the room, or reading the body language, drop/increase in energy, head nods, smiles, drop in shoulders, emotions etc. and playing back to the room your observation (sans judgement, obviously). Making the intangibles explicit is an important step in group coaching - name it to tame it, if you will. The third building block is believing and trusting that the group system is intelligent and self-healing.


Hybrid cloud in 2023: 5 predictions from IT leaders

Hood says this trend is fundamentally about operators accelerating their 5G network deployments while simultaneously delivering innovative edge services to their enterprise customers, especially in key verticals like retail, manufacturing, and energy. He also expects growing use of AI/ML at the edge to help optimize telco networks and hybrid edge clouds. “Many operators have been consuming services from multiple hyperscalers while building out their on-premise deployment to support their different lines of business,” Hood says. “The ability to securely distribute applications with access to data acceleration and AI/ML GPU resources while meeting data sovereignty regulations is opening up a new era in building application clouds independent of the underlying network infrastructure.” ... “Given a background of low margins, limited budgets, and the complexity of IT systems required to keep their businesses operating, many retailers now understandably rely on a hybrid cloud approach to help reduce costs whilst delivering value to their customers,” says Ian Boyle, Red Hat chief architect for retail.


Looking ahead to the network technologies of 2023

The growth in Internet dependence is really what’s been driving the cloud, because high-quality, interactive user interfaces are critical, and the cloud’s technology is far better for those things, not to mention easier to employ than changing a data center application would be. A lot of cloud interactivity, though, adds to latency and further validates the need for improvement in Internet latency. Interactivity and latency sensitivity tend to drive two cloud impacts that then become network impacts. The first is that as you move interactive components to the cloud via the Internet, you’re creating a new network in and to the cloud that’s paralleling traditional MPLS VPNs. The second is that you’re encouraging cloud hosting to move closer to the edge to reduce application latency. ... What about security? The Internet and cloud combination changes that too. You can’t rely on fixed security devices inside the cloud, so more and more applications will use cloud-hosted instances of security tools. Today, only about 7% of security is handled that way, but that will triple by the end of 2023 as SASE, SSE, and other cloud-hosted security elements explode.



Quote for the day:

"Leadership is unlocking people's potential to become better." -- Bill Bradley

Daily Tech Digest - December 25, 2022

How Value Stream Management is Fueling Digital Transformation

One of the world’s largest aerospace companies, The Boeing Company has been employing VSM for several years now. Through VSM, they optimized resource utilization and reduced waste. “We always thought we were doing a good job of producing value until we started to work through this,” explained Lynda Van Vleet, Boeing’s portfolio management systems product manager. “In our first two years, we saved hundreds of millions of dollars. But that wasn’t our goal. I think a lot of organizations look at this as a way of saving money because you usually do, but if you start out looking at it as a way of creating value, that just comes along with it.” The organization changed legacy approaches to product management and project investment. This enabled them to speed up their ability to innovate and pursue digital transformation. ... By establishing cross-team visibility, leaders were able to spot redundancies. For example, they saw how different IT organizations had their own analytics teams. “We had people in every organization doing the same thing,” explained Van Vleet. Boeing’s executives established a single analytics team to realign the work more efficiently and improve consistency.


Rethinking Risk After the FTX Debacle

The threat surface for FTX clients wasn't just about protecting their FTX passwords or hoping the exchange wouldn't get hacked like the Mt. Gox bitcoin exchange and so many others did. Instead, their portfolios were at risk of implosions over assets and investments they had never heard of. That is the definition of risk: having your hard-earned money and investments merged with a toxic mix of super-risky sludge. That’s a helpless place to be. After more than 20 years in cybersecurity, it is difficult not to think about risk exposure and threat management in a case like this. Security teams are dealing with something much more akin to SBF than Madoff. There is no singular threat facing an enterprise today. Instead, it is a constellation of assets, devices, data, clouds, applications, vulnerabilities, attacks, and defenses. Security teams' biggest weakness is that they are being asked to secure what they can neither see nor control. Where is our critical data? Who is accessing it, and who needs access? Every day in cybersecurity, the landscape of what needs to be protected changes. Applications are updated. Data is stored or in transit among multiple clouds. Users change. Every day represents new challenges.


Quantum Machine Learning: A Beginner’s Guide

Welcome to the world of quantum machine learning! In this tutorial, we will walk you through a beginner-level project using a sample dataset and provide step-by-step directions with code. By the end of this tutorial, you will have a solid understanding of how to use quantum computers to perform machine learning tasks and will have built your first quantum model. But before we dive into the tutorial, let’s take a moment to understand what quantum machine learning is and why it is so exciting. Quantum machine learning is a field at the intersection of quantum computing and machine learning. It involves using quantum computers to perform machine learning tasks, such as classification, regression, and clustering. Quantum computers are powerful machines that use quantum bits (qubits) instead of classical bits to store and process information. This allows them to perform certain tasks much faster than classical computers, making them particularly well-suited for machine learning tasks that involve large amounts of data.
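
To make this concrete, here is a minimal, self-contained Python sketch (using only NumPy to classically simulate one qubit) of the kind of variational model such a tutorial builds: a feature is encoded as a rotation angle, a trainable rotation follows, and the Pauli-Z expectation value serves as the prediction. The toy data and training loop are invented for illustration and are not the tutorial’s own code.

import numpy as np

# Pauli-Z observable and single-qubit rotation gates (classical simulation).
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Rotation about the X axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    """Rotation about the Y axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def circuit(weight, x):
    """Encode feature x with RX, apply a trainable RY, return <Z> as the model output."""
    state = np.array([1, 0], dtype=complex)      # start in |0>
    state = ry(weight) @ rx(x) @ state
    return float(np.real(state.conj() @ Z @ state))

# Toy binary data (hypothetical): small angles labelled +1, large angles labelled -1.
X = np.array([0.1, 0.3, 2.8, 3.0])
Y = np.array([1.0, 1.0, -1.0, -1.0])

def cost(weight):
    """Mean squared error between circuit outputs and labels."""
    preds = np.array([circuit(weight, x) for x in X])
    return float(np.mean((preds - Y) ** 2))

# Train the single rotation angle with finite-difference gradient descent.
w, lr, eps = 0.5, 0.4, 1e-4
for _ in range(100):
    grad = (cost(w + eps) - cost(w - eps)) / (2 * eps)
    w -= lr * grad

print(f"trained weight: {w:.3f}, cost: {cost(w):.4f}")

On real hardware the same circuit would be submitted to a quantum device through a framework such as Qiskit or PennyLane, with the expectation value estimated from repeated measurements rather than computed exactly.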


Importance of anti-money laundering regulations among prosumers for a cybersecure decentralized finance

To the best of our knowledge, this is the first study to assess this possibility with supportive evidence from a game theoretical perspective. In addition, our study examines and sheds light on the importance of AML regulations among prosumers in fulfilling the institutional role of preventing cyberattacks by the decentralized governance in a blockchain-based sharing economy. This paper focuses on prosumers as they undertake institutional roles in blockchain-based sharing economy models (Tan & Salo, 2021). In fact, most hackers are prosumers and may serve as end-users as well as developers. Therefore, their impact can be significant in setting the tone for safety and security of a blockchain-based sharing economy. Last but not least, our paper provides policy suggestions for creating effective cybersecurity efforts in permissionless DeFi without relinquishing its decentralized nature. Our first policy suggestion is the integration of artificial intelligence (AI) employing machine learning (ML) techniques to promptly flag, track, and recover stolen tokens from offenders.
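
As a loose illustration of that last policy suggestion (not the paper’s method), the sketch below uses scikit-learn’s IsolationForest to flag anomalous token transfers from a few invented transaction features; the feature names, data, and contamination rate are all hypothetical.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount_in_tokens, transfers_in_last_hour, wallet_age_days]
normal = np.random.default_rng(0).normal(loc=[50, 2, 400], scale=[20, 1, 100], size=(500, 3))
suspicious = np.array([[9_000, 40, 1], [12_000, 55, 2]])  # huge, rapid transfers from brand-new wallets
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector and flag outliers (-1 = anomalous, 1 = normal).
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)

flagged = np.where(labels == -1)[0]
print("flagged transaction indices:", flagged)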


Conscious Machines May Never Be Possible

Pondering this question, it’s important to recognize that intelligence and consciousness are not the same thing. While we humans tend to assume the two go together, intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly smart, at least by our questionable human standards. If the great-granddaughter of LaMDA does reach or exceed human-level intelligence, this does not necessarily mean it is also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures. Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.


Six Ways To Pivot Hiring Strategies To Attract Cybersecurity Talent

To recruit and retain cybersecurity talent, you should change your approach with these six strategies. Learn from past hirings, whether successful or not: Not every hire will turn out as expected, but you can learn from these previous decisions. Remember, an interview is a conversation: You and the candidate have a lot to learn about each other. You could lose a good hire if interviews are tightly controlled and formal. In the “real world” of cybersecurity, communication and collaboration are critical, so that’s the type of environment you should create in the hiring process. Don’t rush to hire: Even if you are understaffed and have had vacancies open for some time, you’ll lose more time and money by hiring the wrong people. Be patient in the process. Find someone who matches your culture: Someone can be a brilliant technical candidate but still be wrong for your organization. In many circumstances, culture fit means someone who has soft skills and wants to grow and evolve. Keep in mind that a highly motivated individual is teachable: They can develop their soft and technical skills under you.


DataOps as a holistic approach to data management

The DataOps approach, which takes its cue from the DevOps paradigm shift, is focused on increasing the rate at which software is developed for use with large data processing frameworks. DataOps also encourages line-of-business stakeholders to collaborate with data engineering, data science, and analytics teams in an effort to reduce silos between IT operations and software development teams. This ensures that the organization’s data may be utilized in the most adaptable and efficient manner to provide desirable results for business operations. DataOps integrates many facets of IT, such as data development, data transformation, data extraction, data quality, data governance, data access control, data center capacity planning, and system operations, because it encompasses so much of the data lifecycle. Typically, a company’s chief data scientist or chief analytics officer leads a DataOps team comprised of specialists like data engineers and analysts. Frameworks and related toolsets exist to support a DataOps approach to collaboration and greater agility, but unlike DevOps, there are no software solutions dedicated to “DataOps.”
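
To give a flavour of what such tooling automates, here is a small, hypothetical Python sketch of a data-quality gate of the kind a DataOps pipeline might run before publishing a dataset downstream; the checks, column names, and example batch are invented for illustration.

import pandas as pd

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality violations; an empty list means the batch may be published."""
    issues = []
    if df.empty:
        issues.append("batch is empty")
    if df["customer_id"].isna().any():             # completeness check
        issues.append("missing customer_id values")
    if df.duplicated(subset=["order_id"]).any():   # uniqueness check
        issues.append("duplicate order_id values")
    if (df["order_total"] < 0).any():              # validity check
        issues.append("negative order totals")
    return issues

# Example batch (hypothetical data)
batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "customer_id": ["a", None, "c"],
    "order_total": [10.0, 5.0, -3.0],
})
problems = quality_gate(batch)
print("block publish:" if problems else "ok to publish:", problems)

In practice a gate like this would sit between the transformation and publication stages of the pipeline, so that governance and quality rules are enforced automatically rather than by manual review.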


How edge-to-cloud is driving the next stage of digital transformation

The thing about computing at the edge is that it needs to run at the speed of life. A self-driving car can't take the time to send off a query and await a response when a truck swerves in front of it. It has to have all the necessary intelligence in the vehicle to decide what action to take. While this is an extreme example, the same is true of factory processes and even retail sales. Intelligence, data analysis, and decision making must be available without a propagation delay, and therefore must live at the edge. Of course, all of this adds to the management overhead. Now you have management consoles from a large number of vendors to contend with, plus those for your services on-premises, and then all the stuff up in the cloud. This is where integration is necessary, where it becomes absolutely essential that all your IT resources – from the edge all the way up to the cloud – be managed from a single, coherent interface. It's not just about ease of use. It's about preventing mistakes and being able to keep track of and mitigate threats.


Cloud to edge: NTT multicloud platform fuels digital transformation

The platform is the heart of our Multicloud as a Service offering because it provides visibility, control and governance across all clouds and for all workloads. It enhances the cloud providers’ native control planes with AI-backed insights for anomaly detection, correlation forecasting, automated operations, agile deployments and more, without limiting direct access to the cloud. These elements give organizations more comfort in consuming these services in a way that is closely aligned with their needs. ... This can be difficult for many clients to do themselves because most have managed their technology in a particular way for years and now have to make a step change into the cloud paradigm. But NTT has operated cloud platforms and delivered managed services across multiple industries and technologies for more than two decades, so we’re perfectly placed to help them make the leap. Some of the components of our platform may be familiar, but how we bring them together is unique. Our many years of operating experience have been baked into this platform to make it a true differentiator.


Top Decentralized Finance (DeFi) Trends in 2023

Governance tokens give individuals the authority to vote on blockchain project development and management-related matters. Because token holders have a say in how a project operates, their goals and interests can be kept aligned. For example, a DeFi project like Compound lets users use native tokens for various farm or rent income schemes; its token (COMP) governs the development of the Compound DeFi protocol. ... We will soon see creators and their followers building new social networks. A new immersive fan economy fueled by social tokens in the metaverse could revolutionize digital monetization. Communities or celebrities can monetize their brand further by using social tokens, which create bidirectional relationships between artists and customers with reciprocal benefits. Individuals, rather than organizations, become the agents of creativity in a dispersed collaborative paradigm. It is a unified and linked metaverse where tokenized NFTs may contain digital data rights while storing, tracking, and enforcing those rights.
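
As a simplified illustration of token-weighted governance (not Compound’s actual implementation), the Python sketch below tallies a proposal vote where each holder’s voting power equals their token balance; the addresses, balances, and quorum rule are invented.

# Hypothetical token-weighted vote tally.
balances = {"0xAlice": 1_200, "0xBob": 300, "0xCarol": 2_500}   # governance token holdings
votes = {"0xAlice": "for", "0xBob": "against", "0xCarol": "for"}

def tally(balances, votes, quorum=0.5):
    """Weight each vote by token balance; pass if 'for' exceeds 'against' and quorum is met."""
    total_supply = sum(balances.values())
    power = {"for": 0, "against": 0}
    for voter, choice in votes.items():
        power[choice] += balances.get(voter, 0)
    turnout = sum(power.values()) / total_supply
    passed = turnout >= quorum and power["for"] > power["against"]
    return power, turnout, passed

print(tally(balances, votes))   # ({'for': 3700, 'against': 300}, 1.0, True)

The design choice to weight votes by holdings is what aligns incentives, and also what real protocols refine further with mechanisms such as vote delegation and timelocks.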



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne