Daily Tech Digest - September 18, 2022

5 ways to secure devops

Devops workflows are designed for speed, iterating rapidly on the latest requirements and performance improvements. Gate reviews, by contrast, are static, and the tools devops teams rely on for security testing can become roadblocks given their gate-driven design. Devops is a continuous process in high-performance IT teams, while stage gates slow the pace of development. Devops leaders often don’t have the time to train their developers to integrate security from the initial phases of a project, and few developers are trained on secure coding techniques to begin with. Forrester’s latest report on improving code security from devops teams looked at the top 50 undergraduate computer science programs in the US, as ranked by US News and World Report for 2022, and found that none require a secure coding or secure application design class. CIOs and their teams are stretched thin by the many digital transformation initiatives, support for virtual teams and ongoing infrastructure projects they run concurrently. CIOs and CISOs also face the challenge of keeping their organizations in regulatory compliance amid more complex audit and reporting requirements. 


Designing APIs for humans: Error messages

The status code of the response should already tell you whether an error happened; the message needs to elaborate so you can actually fix the problem. It might be tempting to have deliberately obtuse messages as a way of obscuring the details of your inner systems from the end user; however, remember who your audience is. APIs are for developers, and they will want to know exactly what went wrong. It’s up to these developers to display an error message, if any, to the end user. Getting an “An error occurred” message can be acceptable if you’re the end user yourself, since you’re not the one expected to debug the problem (although it’s still frustrating). As a developer, there’s nothing more frustrating than something breaking and the API not having the common decency to tell you what broke. ... Letting you know what the error was is the bare minimum, but what a developer really wants to know is how to fix it. A “helpful” API works with the developer by removing barriers and obstacles to solving the problem. The message “Customer not found” gives us some clues as to what went wrong, but as API designers we know we could be giving so much more information here.
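As a sketch of what "giving so much more information" might look like, here is a hypothetical structured error body for that "Customer not found" case. The field names, endpoint, and documentation URL are illustrative assumptions, not any particular API's format:

```python
import json

def customer_not_found_error(customer_id: str) -> dict:
    """Build a 'helpful' 404 body: it names the resource type, echoes
    the bad input, and tells the developer how to fix the problem."""
    return {
        "error": {
            "type": "resource_not_found",            # machine-readable category
            "message": f"Customer '{customer_id}' not found.",
            "suggestion": "Check the ID, or list customers via GET /v1/customers.",
            "doc_url": "https://example.com/docs/errors#resource_not_found",
        }
    }

body = json.dumps(customer_not_found_error("cus_123"), indent=2)
```

A client seeing this payload knows what broke, which input caused it, and where to read more, without the server revealing anything about its internals.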


Arm Neoverse roadmap targets enterprise infrastructure, cloud

"Compute workloads are on a relentless march higher, and becoming more complex," said Chris Bergey, senior vice president and general manager of Arm's infrastructure line of business, at a press briefing. "Machine learning and AI are taking over the future, and so infrastructure will look nothing like the past." Over the next year, Arm will work closely with its cloud and software partners to optimize cloud-native software infrastructure, frameworks and workloads. These partnerships include contributions to projects including Kubernetes and Istio, along with several CI/CD tools used for creating cloud-native software for the Arm architecture. Arm will also work to improve machine learning frameworks such as TensorFlow and a number of workloads such as big data, analytics and media processing. The company is moving into more traditional enterprise spaces now, Bergey said, noting the work it has done with VMware on its Project Monterey and providing support for Red Hat's OpenShift and SAP's HANA. "These cloud providers all use GPUs to underpin their cloud workloads, and the majority of them are using Arm," Bergey said.


How quantum physicists are looking for life on exoplanets

So, some of the biggest things in the universe are certainly quantum mechanical, including supermassive black holes, which can lose energy through a quantum phenomenon known as Hawking radiation. The second point: one often thinks quantum deals only with very low temperatures. Again, take our sun as an example—it's very hot, but it's quantum mechanical. Low temperature is not a requirement for quantum behavior. This example of a star, the quantumness of the fusion process and the high temperatures associated with it—I just want to broaden the view of what quantum mechanics is and how ubiquitous it is. ... It's quite amazing that we can determine what is in these planets' atmospheres—planets that would be impossible for humans to ever visit. And we can look for signatures of life: are there molecules we associate with life floating around in these planets, at least if it's Earth-like life? Then we might be able to determine with some probability that some planet way out there, which no human could ever visit, harbors life. Or maybe we could discover other candidate forms of life.


How Is Platform Engineering Different from DevOps and SRE?

Over time, thought leaders came up with different metrics for organizations to gauge the success of their DevOps setup. The DevOps bible, “Accelerate,” established lead time, deployment frequency, change failure rate and mean time to recovery (MTTR) as standard metrics. Reports like the State of DevOps from Puppet and Humanitec’s DevOps benchmarking study used these metrics to compare top-performing organizations to low-performing organizations and deduce which practices contribute most to their degree of success. DevOps unlocked new levels of productivity and efficiency for some software engineering teams. But for many organizations, DevOps adoption fell short of their lofty expectations. Manuel Pais and Matthew Skelton documented these anti-patterns in their book “DevOps Topologies.” In one scenario, an organization tries to implement true DevOps and removes dedicated operations roles. Developers are now responsible for infrastructure, managing environments, monitoring, etc., in addition to their previous workload. Often senior developers bear the brunt of this shift, either by doing the work themselves or by assisting their junior colleagues.
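The four "Accelerate" metrics are simple to compute once deployment and incident events are logged. A minimal sketch, with deliberately simplified inputs and a fixed measurement window; this is an illustration of the definitions, not how any particular benchmarking study gathers its data:

```python
def dora_metrics(lead_times_hours, failed_deploys, recovery_hours, window_days=30):
    """Toy calculation of the four 'Accelerate' metrics over one window.

    lead_times_hours : hours from commit to deploy, one entry per deployment
    failed_deploys   : how many of those deployments caused a production failure
    recovery_hours   : hours to restore service, one entry per failure
    """
    n = len(lead_times_hours)
    return {
        "lead_time_hours": sum(lead_times_hours) / n,          # avg lead time
        "deploys_per_day": n / window_days,                    # deployment frequency
        "change_failure_rate": failed_deploys / n,             # share of bad deploys
        "mttr_hours": (sum(recovery_hours) / len(recovery_hours))
                      if recovery_hours else 0.0,              # mean time to recovery
    }

m = dora_metrics(lead_times_hours=[24, 48, 12, 36],
                 failed_deploys=1, recovery_hours=[2.0])
```

Comparing these numbers across teams or over time is exactly how the benchmarking reports mentioned above separate top performers from low performers.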


The Cyber Security Head Game

Just as the predators of the fish below are never going to go away (which is why this fish camouflages itself and sports huge fake eyes to scare predators), cyber predators also will never go away. And the best of these cyber predators will continue to penetrate even the strongest defenses, because the exponential increase in IT system complexity, which makes it increasingly difficult to even understand the full extent of what you're defending, favors cyber attackers over cyber defenders. So we need to assume that some hackers will inevitably get inside our networks, and thus we must adopt strategies of deception, similar to those employed successfully by our fish here, to lessen the harm from competent hackers who manage to get up close and personal. We also need to create doubt in hackers’ minds about the benefits of attacking us in the first place, in the same way that the poisonous cane toad avoids attacks from predators who know the toad’s skin has lethal poison glands, and milk snakes, which have no poison, discourage would-be predators by mimicking the coloration of coral snakes, which definitely do have deadly venom.


US Cyber-Defense Agency Urges Companies to Automate Threat Testing

Automated threat testing is still not widespread, according to the official, who added that organizations sometimes don’t really follow through after deploying expensive tools on their networks and instead just assume the tools are doing the job. Automating security controls will make it easier to stop attackers who rely on established tactics. The top threat actors are still going back and leveraging vulnerabilities that are 10 years old or older, warned the CISA official. CISA is making the recommendation in collaboration with the Center for Threat-Informed Defense, a 29-member nonprofit formed in 2019 that draws on MITRE’s framework. Iman Ghanizada, global head of autonomic security operations at Google Cloud, a research sponsor of the Center, said automated testing is important for creating continuous feedback loops that can steadily improve protection. “Whether you are a large company or a startup, you have to have visibility, analytics, response and continuous feedback,” he said.


Smart Cities: Mobility ecosystems for a more sustainable future

Although every city is different, leading cities are becoming smarter through their participation in large, complex, digitally enabled ecosystems. The question for many urban leaders, however, is how to engage with them effectively. Our experience in working with large transportation and communications clients yields a multilayered model and approach to guide the design and management of urban mobility systems. Given the interconnected nature of the building blocks of mobility, each layer—demand, supply, and foundational—is critical. Cities must understand and manage all the interactions and interdependencies. For example, demand for different forms of transportation is enabled via available modes of transit and supporting infrastructure. None of these would be possible without regulations, financing, insurance, and innovation. ... To achieve its vision of becoming a 45-minute city, Singapore is focusing on building its infrastructure (e.g., it is building intermodal mobility hubs to allow commuters to move seamlessly from one mode of transportation to another). The city is developing a robust innovation ecosystem, collaborating with many private-sector players. 


How to Draw and Retain Top Talent in Cyber Security

Before you introduce policies to increase diversity, you need to know who is currently applying. Gather data on applicants to establish if you need to take proactive steps to attract specific groups – you can’t make rational business decisions without data. Analyze job descriptions to eliminate bias so you aren’t deterring anyone. Review the language -- are you unconsciously drafting job advertisements and application forms with a white male in mind? Consider a post-application survey so you can establish what is appealing to recruits and what might cause them to drop out. You’ll be surprised how many people want to share their feedback because a negative job application process can deter an applicant for good, and you could be missing out on the best talent through ignorance. We implemented an Applicant Tracking System to understand the sources our candidates are coming from, see how diverse the candidate pool is (or not), and improve the candidate experience by being able to track how their process progresses and ends. ... Once you’ve got these cyber professionals on board, you need to keep them. 


Why shift left is burdening your dev teams

Security and compliance challenges are a significant barrier to most organizations’ innovation strategies, according to CloudBees. The survey also reveals agreement among C-suite executives that a shift-left security strategy is a burden on dev teams. 76% of C-suite executives say compliance challenges, and 75% say security challenges, limit their company’s ability to innovate. This is due, in part, to the significant time spent on compliance audits, risks, and defects. At the same time, C-suite executives overwhelmingly favor a shift-left approach, a strategy of moving software testing and evaluation earlier in the development lifecycle, placing the burden of compliance on development teams. In fact, 83% of C-suite executives say the approach is important for them as an organization, and 77% say they are currently implementing a shift-left security and compliance approach. This is despite 58% of C-suite executives reporting that shift left is a burden on their developers. “These survey findings underscore the urgent need to transform the software security and compliance landscape.”



Quote for the day:

"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu

Daily Tech Digest - September 16, 2022

The AI-First Future of Open Source Data

If we take it one step further from the GPL for data, we begin to see the value equation of data, or “the data-in-to-data-out ratio,” as Augustin calls it. He uses the example of why people are so willing to give up parts of their data and privacy to websites: the small amount of data they’re handing over returns greater value back to them. Augustin sees the data-in-to-data-out ratio as a tipping point in open source data. Calling it one of his application principles, Augustin suggests that data engineers should focus on providing users with more value while taking less and less information from them. He also wants to figure out a way to never ask your users for anything, only provide them an advantage. For example, new app users will always be asked for information. But how can we skip that step and collect data directly in exchange for providing value? “Most people are willing to [give up data] because they get a lot of utility back. Think about the ratio of how much you put in versus how much you get back. You get back an awful lot. People are willing to give up so much of their personal information because they get a lot back,” he says.


How User Interface Testing Can Fit into the CI/CD Pipeline

Reliance on manual testing is why many organizations can’t successfully implement CI/CD. If CI/CD involves manual processes, it cannot be sustained, because they slow down the entire delivery cycle. Testing is no longer the sole responsibility of developers or testers, and it takes investment in and integration with infrastructure. Developer teams need to focus on building the coverage that is essential. They should focus on testing workflows, not features, to be more efficient. Additionally, manual testers who are not developers can still be part of the process, provided that they use a testing tool that gives them the required automation capabilities in a low-code environment. For example, with Telerik Test Studio, a manual tester can create an automated test by interacting with the application’s UI in a browser. That test can be presented without code, and as a result testers can learn how the application behaves. Another best practice in making UI testing efficient is to be selective about what is included in the pipeline. 
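One way to be selective about what runs in the pipeline, while still testing workflows rather than individual features, is to map critical user workflows to UI suites and run a suite only when the code it covers changed. A toy sketch; the workflow names, suite names, and path mapping are all hypothetical:

```python
# Map critical user workflows (not individual features) to the UI suite
# that covers them, and to the source paths that suite depends on.
WORKFLOW_SUITES = {
    "checkout": {"paths": ("cart/", "payment/"), "suite": "ui_checkout"},
    "signup":   {"paths": ("auth/",),            "suite": "ui_signup"},
    "search":   {"paths": ("search/",),          "suite": "ui_search"},
}

def suites_to_run(changed_files):
    """Return the UI suites the pipeline should run for this commit,
    based on which workflow-relevant paths the commit touched."""
    selected = set()
    for workflow in WORKFLOW_SUITES.values():
        if any(f.startswith(workflow["paths"]) for f in changed_files):
            selected.add(workflow["suite"])
    return sorted(selected)
```

A commit that only touches documentation triggers no UI suites at all, which keeps the pipeline fast without sacrificing workflow coverage.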


Want to change a dysfunctional culture? Intel’s Israel Development Center shows how

Intel’s secret weapon, one that until recently it did not talk about much, is its Israel Development Center. Intel is the largest employer in Israel, a nation surrounded by hostile countries where women and men are treated more equally than in most other countries I’ve studied. Employees are highly supportive of each other, in a country that is itself supportive of women across a wide variety of industries. The facility itself is impressively large and well-built and eclipses Intel’s corporate office in both size and security. The work done there largely defines Intel’s historic success in both product performance and quality, making it an example of how a company should be run. Surprisingly, the collaborative and supportive country culture overrode the hostile and self-destructive corporate culture that has defined the US tech industry. What Gelsinger has done is showcase the development center as a template for the rest of Intel: a firm more tolerant of failure, more supportive of women and focused like a laser on product quality, performance and caring for Intel’s customers.


Uber security breach 'looks bad', potentially compromising all systems

While it was unclear what data the ride-sharing company retained, he noted that whatever it had most likely could be accessed by the hacker, including trip history and addresses. Given that everything had been compromised, he added that there also was no way for Uber to confirm if data had been accessed or altered since the hackers had access to logging systems. This meant they could delete or alter access logs, he said. In the 2016 breach, hackers infiltrated a private GitHub repository used by Uber software engineers and gained access to an AWS account that managed tasks handled by the ride-sharing service. It compromised data of 57 million Uber accounts worldwide, with hackers gaining access to names, email addresses, and phone numbers. Some 7 million drivers also were affected, including details of more than 600,000 driver licenses. Uber later was found to have concealed the breach for more than a year, even resorting to paying off hackers to delete the information and keep details of the breach quiet.


What Is GPS L5, and How Does It Improve GPS Accuracy?

L5 is the most advanced GPS signal available for civilian use. Although it’s primarily meant for life-critical and high-performance applications, such as helping aircraft navigate, it’s available for everyone, like the L1 signal. So the manufacturers of mass-market consumer devices such as smartphones, fitness trackers, in-car navigation systems, and smartwatches are integrating it into their devices to offer the best possible GPS experience. One of the key advantages of the L5 signal is that it uses the 1176.45MHz radio frequency, which is reserved for aeronautical navigation worldwide. As such, it doesn’t have to worry about interference from other radio wave traffic in this frequency, such as television broadcasts, radars, and ground-based navigation aids. With L5 data, your device can access more advanced methods to determine which signals have less error and effectively pinpoint the location. It’s particularly helpful in areas where a GPS signal can be received but is severely degraded.
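On devices that expose raw GNSS measurements, the carrier frequency alone is enough to tell whether a fix is using L1, L5, or both. A small sketch using the frequencies quoted above; the tolerance value is an illustrative assumption:

```python
L1_HZ = 1575.42e6   # legacy civilian GPS signal
L5_HZ = 1176.45e6   # newer signal in the aeronautical-protected band

def gnss_band(carrier_hz, tolerance_hz=1e6):
    """Classify a raw GNSS carrier frequency as L1, L5, or unknown.
    Dual-frequency receivers compare both bands to estimate and cancel
    ionospheric delay, one reason L5-capable devices are more accurate."""
    if abs(carrier_hz - L1_HZ) < tolerance_hz:
        return "L1"
    if abs(carrier_hz - L5_HZ) < tolerance_hz:
        return "L5"
    return "unknown"
```

A navigation app could use a check like this to report whether the device is actually getting the L5 accuracy benefit, rather than merely having L5-capable hardware.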


Digital transformation: How to get buy-in

Today’s IT leader has to be much more than tech-savvy; they have to be business-savvy. IT leaders today are expected to identify and build support for transformational growth, even when it’s not popular. At Clarios, I added “Challenge the Status Quo, Be a Respectful Activist” to our IT guiding principles, knowing that around any CEO or general manager’s table they need one or two disruptors – IT leaders should be one. However, once that activist IT leader sells their vision to the boss, they then have to drive change in their peers and the entire organization, without formal authority. ... Our IT leaders can gain buy-in on new ideas by actively listening to our business partners. Our focus is to understand, from their perspective, the challenges impeding their work by rounding in our hospital locations to see the issues first-hand. So when we propose solutions, they come from their perspective. Using these practices, we can bring forth the vision of Marshfield Clinic Health System, because we can implement technology that bridges human interaction between our patients and care teams, which is at the heart of healthcare.


How to Prepare for New PCI DSS 4.0 Requirements

There are several impactful changes to the requirements associated with DSS v4.0 compliance, ranging from policy development (all changes will require some level of policy change) to Public Key Infrastructure (PKI), as there will be multiple changes to how keys and certificates are managed. Carroll points out there will also be remote access changes, including defined changes to how systems may be accessed remotely, and changes to risk assessments: the standard now requires multiple, regular “targeted risk assessments” that capture risk in a format specified by the PCI DSS. Dan Stocker, director at Coalfire, a provider of cybersecurity advisory services, points out fintech is growing rapidly, with innovative uses for credit card data. “Entities should realistically evaluate their obligations under PCI,” he says. “Use of descoping techniques, such as tokenization, can reduce the total cost of compliance, but also limit product development choices.” He explains modern enterprises have multiple compliance obligations across diverse topics, such as financial reporting, privacy, and, in the case of service providers, many more.
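To make the descoping point concrete, here is a toy illustration of tokenization: systems outside the vault store only the token, never the primary account number (PAN), which shrinks the environment that PCI DSS controls must cover. A real deployment would use a hardened, HSM-backed token service, not an in-memory dict:

```python
import secrets

class TokenVault:
    """Toy tokenization vault. Only this component ever holds real PANs;
    everything downstream handles opaque tokens and so falls outside
    (or into a reduced tier of) PCI DSS scope."""

    def __init__(self):
        self._vault = {}  # token -> PAN; stands in for a secured service

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # no relation to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]               # only the vault can reverse it
```

The trade-off Stocker mentions is visible even here: any product feature that needs the real card number must round-trip through the vault, which constrains design choices.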


Building Large-Scale Real-Time JSON Applications

A critical part of building large-scale JSON applications is ensuring the JSON objects are organized efficiently in the database for optimal storage and access. Documents may be organized in the database in one or more dedicated sets (tables), over one or more namespaces (databases), to reflect ingest, access and removal patterns. Multiple documents may be grouped and stored in one record, either in separate bins (columns) or as sub-documents in a container group document. Record keys are constructed as a combination of the collection-id and the group-id to provide fast logical access as well as group-oriented enumeration of documents. For example, the ticker data for a stock can be organized in multiple records with keys consisting of the stock symbol (collection-id) + date (group-id). Multiple documents can be accessed using a scan with a filter expression (predicate), a query on a secondary index, or both. A filter expression tests the values and properties of elements in the JSON, for example that an array is larger than a certain size or that a value is present in a sub-tree. A secondary index defined on a basic or collection type provides fast value-based queries, described below.
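A minimal sketch of the ticker example above: the record key is composed of collection-id (stock symbol) plus group-id (date), and a filter-expression-style predicate tests a property of the documents. The key format and field names are illustrative, not any specific database's API:

```python
def ticker_record_key(symbol: str, date: str) -> str:
    """Compose a record key from collection-id (symbol) and group-id (date):
    all tick documents for one symbol on one day live in one record,
    giving fast logical access and group-oriented enumeration."""
    return f"{symbol}:{date}"

# One record groups the day's tick documents; separate bins per document
# or a sub-document container are both possible layouts.
record = {
    "key": ticker_record_key("AAPL", "2022-09-16"),
    "ticks": [
        {"t": "09:30:00", "price": 152.1, "vol": 1200},
        {"t": "09:30:05", "price": 152.3, "vol": 800},
    ],
}

def matches(rec, min_ticks):
    """A filter-expression-style predicate: 'the array is larger than
    a certain size', evaluated against a record's JSON content."""
    return len(rec["ticks"]) >= min_ticks
```

A scan would apply a predicate like `matches` server-side across many records, while a secondary index on, say, `price` would answer value-based queries without scanning.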


Digital self defense: Is privacy tech killing AI?

AI needs data. Lots of it. The more data you can feed a machine learning algorithm, the better it can spot patterns, make decisions, predict behaviours, personalise content, diagnose medical conditions, power smart everything, and detect cyber threats and fraud; indeed, AI and data make for a happy partnership: “The algorithm without data is blind. Data without algorithms is dumb.” Even so, some digital self defense may be in order. But AI is at risk. Not everyone wants to share, at least not under the current rules of digital engagement. Some individuals disengage entirely, becoming digital hermits. Others proceed with caution, using privacy-enhancing technologies (PETs) to plug the digital leak, a kind of digital self defense karate chop: they don’t trust website privacy notices, so they verify them with tools like DuckDuckGo’s Privacy Grade extension and, soon, machine-readable privacy notices. They don’t tell companies their preferences; they enforce them with dedicated tools, and search anonymously using AI-powered privacy-protective search engines and browsers like DuckDuckGo, Brave and Firefox. 


Why Mutability Is Essential for Real-Time Data Analytics

At Facebook, we built an ML model that scanned all new calendar events as they were created and stored in the event database. Then, in real time, an ML algorithm would inspect each event and decide whether it was spam. If it was categorized as spam, the ML model code would insert a new field into that existing event record to mark it as spam. Because so many events were flagged and immediately taken down, the data had to be mutable for efficiency and speed. Many modern ML-serving systems have emulated our example and chosen mutable databases. This level of performance would have been impossible with immutable data. A database using copy-on-write would quickly get bogged down by the number of flagged events it would have to update. If the database stored the original events in partition A and appended flagged events to partition B, every query would have to merge relevant records from both partitions, requiring additional query logic and processing power. Both workarounds would have created an intolerable delay for our Facebook users, heightened the risk of data errors, and created more work for developers and data engineers.
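The contrast can be sketched in a few lines: with a mutable store, flagging spam is one in-place field insert, while the two-partition workaround forces merge logic into every read. Plain dicts stand in for database records here, and the event data is hypothetical:

```python
# Mutable store: flagging an event inserts a field into the existing
# record -- one write, and every subsequent read sees the flag.
events = {
    "evt_1": {"title": "Team sync"},
    "evt_2": {"title": "FREE CRYPTO!!!"},
}

def flag_spam(event_id):
    events[event_id]["spam"] = True  # in-place field insert

# Immutable alternative: originals in partition A, flags appended to
# partition B, so every read must merge records from both partitions.
partition_a = {"evt_1": {"title": "Team sync"},
               "evt_2": {"title": "FREE CRYPTO!!!"}}
partition_b = []  # append-only flag records

def flag_spam_append(event_id):
    partition_b.append({"event_id": event_id, "spam": True})

def read_event(event_id):
    doc = dict(partition_a[event_id])
    for rec in partition_b:          # extra merge logic on every query
        if rec["event_id"] == event_id:
            doc["spam"] = rec["spam"]
    return doc
```

The merge loop runs on every read and grows with the number of flags, which is exactly the delay and complexity the paragraph above describes.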



Quote for the day:

"Leadership and learning are indispensable to each other." -- John F. Kennedy

Daily Tech Digest - September 15, 2022

AI is playing a bigger role in cybersecurity, but the bad guys may benefit the most

“Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails,” Finch said. “AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools.” Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior up until it’s ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. ... But Finch said, “Given the economics of cyberattacks — it’s generally easier and cheaper to launch attacks than to build effective defenses — I’d say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run of the mill criminals are not going to have access to the greatest AI minds in the world.”


Cybersecurity’s Too Important To Have A Dysfunctional Team

With such difficulty recruiting and retaining staff, one option businesses should consider is training and reskilling programmes for existing staff to help bridge the gap. Current cybersecurity professionals can solidify what they already know and stay up to date on the latest learnings. Alongside cybersecurity professionals, other technology professionals can be trained and recruited into these roles; technology professionals are likely to have an affinity for the types of skills needed to succeed in cybersecurity. People from non-technical backgrounds may still be able to learn what is needed to perform in these roles, especially if businesses are willing to invest and cover the cost of the training. When there is a skills shortage, as is currently the case, and when vacancies outstrip the available talent, organisations need to be prepared to be imaginative in finding solutions. Alongside this, arming all teams, regardless of their skills and experience, with the right tools and support is essential. Working with knowledgeable and trusted partners can help outsource some of the work and offset any skills gaps as the external partner becomes an extension of the in-house team.


How Sweden goes about innovating

The innovation agency functions much like its counterparts in other countries, similarly to the Finnish Funding Agency for Technology and Innovation (Tekes) in neighbouring Finland, and to the part of the US National Science Foundation (NSF) that does seed funding on the other side of the Atlantic. The Swedish government gives Vinnova more than €300m each year to invest through grants to different kinds of actors, which might be small companies, research institutes, large competence centres, or consortia of companies working together on projects. Vinnova invests this money along 10 different themes, including sustainable industry and digital transformation. To report on the social and economic effects of its funding, the agency produces two impact studies annually. It has also published a document that describes its approach to tracking the impact of investments. “It’s never the case that we’re alone in the responsibility for success or failure,” says Göran Marklund, head of strategic intelligence and deputy director-general at Vinnova. 


Bringing AI to inventory optimization

Chasing today’s consumer patterns is a losing game, he believes. “It’s important to take a long-term view so that the next time the pattern shifts, you’ll be ready,” he said. The antuit.ai solution works by combining the historical data that supply chains have always used as well as new data becoming available, doing it at a scale perhaps not previously used, and then utilizing emerging technologies like AI and machine learning to process that data, make decisions and then learn from the execution of those decisions. “If I’m a retailer buying from CPG companies to service hundreds of stores, I have to make inventory decisions such as what port to land, what distribution centers to send it to, how to allocate it to the stores down to the shelf level and at what price to sell it,” Lakshmanan explained. “Part of my data equation is knowing what has historically sold, at what price, what promotions I ran, how much inventory did I have and whether there were any external factors, like was it raining. Now, if I know it’s going to rain next week, I have backward and forward-looking data that I can put through an algorithm to determine things like what is the likely demand at a store in Plano, Texas.”
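The "backward and forward-looking data" idea reduces, in miniature, to combining a historical base rate with forward-looking adjustments such as a planned promotion or a rain forecast. A deliberately simplified sketch; the multiplicative uplift factors are illustrative assumptions, where a real system would learn them from data with ML:

```python
def expected_demand(base_daily_avg, promo_uplift, rain_uplift,
                    promo_planned, rain_forecast):
    """Toy demand estimate for one store/SKU.

    base_daily_avg : backward-looking: average units sold per day
    promo_uplift   : multiplicative factor, e.g. 1.5 = +50% during promos
    rain_uplift    : multiplicative factor, e.g. 1.2 = +20% on rainy days
    promo_planned, rain_forecast : forward-looking signals for the day
    """
    demand = base_daily_avg
    if promo_planned:
        demand *= promo_uplift
    if rain_forecast:
        demand *= rain_uplift
    return demand
```

Downstream inventory decisions (which port, which distribution center, which shelf) would then consume estimates like this per store and per day.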


Ambient computing has arrived: Here's what it looks like, in my house

Ambient computing is ignorable computing. It's there, but it's in the background, doing the job we've built it to do. One definition is a computer you use without knowing that you're using it. That's close to Eno's definition of his music -- ignorable and interesting. A lot of what we do with smart speakers is an introduction to ambient computing. It's not the complete ambient experience, as it relies on only your voice. But you're using a computer without sitting down at a keyboard, talking into thin air. Things get more interesting when that smart speaker becomes the interface to a smart home, where it can respond to queries and drive actions, turning on lights or changing the temperature in a room. But what if that speaker wasn't there at all, with control coming from a smart home that takes advantage of sensors to operate without any conscious interaction on your part? You walk into a room and the lights come on, because sensors detect your presence and because another set of sensors indicate that the current light level in the room is lower than your preferences.
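The lighting behavior just described reduces to a one-line rule over two sensors and a stored preference, which is part of what makes ambient computing "ignorable":

```python
def lights_should_turn_on(presence: bool, lux: float, preferred_lux: float) -> bool:
    """Act only when someone is present AND the measured light level is
    below their preference -- no conscious interaction required."""
    return presence and lux < preferred_lux
```

A real smart home would evaluate rules like this continuously per room, driven by motion and light sensors rather than any user command.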


Most enterprises looking to consolidate security vendors

Cost optimization should not be a driver, Gartner VP analyst John Watts said. Those looking to cut costs must reduce products, licenses and features, or ultimately renegotiate contracts. A drawback for those pursuing consolidation has been a worsening of risk posture in 24% of cases, rather than an improvement. But if cost savings do result from consolidation, CISOs can invest them in preventing attack surface expansion. “This trend captures a dramatic increase in attack surface emerging from changes in the use of digital systems, including new hybrid work, accelerating use of public cloud, more tightly interconnected supply chains, expansion of public-facing digital assets and greater use of operational technology (cyber physical systems—CPS). Security teams may need to expand licensing, add new features, or point solutions to address this trend,” Watts tells CSO. The time invested should also not be taken for granted: Gartner found that vendor consolidation can take a long time, with nearly two-thirds of organizations saying they have been consolidating for three years.


Software-defined perimeter: What it is and how it works

An SDP is specifically designed to prevent infrastructure elements from being viewed externally. Hardware such as routers, servers, printers, and virtually anything else connected to the enterprise network that is also linked to the internet is hidden from all unauthenticated and unauthorized users, regardless of whether the infrastructure is in the cloud or on-premises. "This keeps illegitimate users from accessing the network itself by authenticating first and allowing access second," says John Henley, principal consultant, cybersecurity, with technology research advisory firm ISG. "SDP not only authenticates the user, but also the device being used. When compared with traditional fixed-perimeter approaches such as firewalls, SDP provides greatly enhanced security." Because SDPs automatically limit authenticated users’ access to narrowly defined network segments, the rest of the network is protected should an authorized identity be compromised by an attacker. "This also offers protection against lateral attacks, since even if an attacker gained access, they would not be able to scan to locate other services," Skipper says.
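The authenticate-first, access-second flow can be sketched as a deny-by-default gate that checks both the user and the device before revealing a narrow set of services. All names and data structures here are hypothetical; a real SDP controller performs this brokering at the network layer:

```python
def sdp_gate(user_token, device_id, sessions, trusted_devices, service_acl):
    """Deny-by-default access check: authenticate the user AND the device
    first, then reveal only the services that identity is entitled to.
    Anything else gets no response at all -- the network stays invisible."""
    user = sessions.get(user_token)                 # authenticate the user
    if user is None or device_id not in trusted_devices.get(user, set()):
        return None                                  # unauthenticated: stay dark
    return sorted(service_acl.get(user, set()))      # narrow segment only
```

Because the gate returns only the caller's own entitlements, a compromised identity cannot even enumerate, let alone reach, the rest of the network, which is the lateral-movement protection Skipper describes.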


Assessing the Security Risks of Emerging Tech in Healthcare

How some of these newer technologies are integrated into existing healthcare environments is also a critical security consideration, other experts say. "Smart hospitals have a blend of old technologies and newer innovations, improving the experience for both the patients and the clinicians," says Sri Bharadwaj, chief operating and information officer of Longevity Health Plan and chair-elect of the Association for Executives in Healthcare Information Security, a healthcare CISO professional organization. The key is to realize that legacy technology embedded in "newer shiny objects" still carries the same security risks, which must be mitigated through strong administrative and technical controls to provide a robust complement to the newer technology, he says. ... "One thing to always keep in mind is that as security leaders our job is to perform due diligence and assess the risk of all services and technologies. We are also to find ways to help mitigate the risk, where possible, and raise the risk awareness to the organization," she says.


7 tell-tale signs of fake agile

When the focus shifts to granular facets of agile, such as Scrum ceremonies, instead of actual content and context, agile’s true principles are lost, says Prashant Kelker, lead partner for digital sourcing and solutions, Americas, at global technology research and advisory firm ISG. Agility is about shipping as well as development. “Developing software using agile methodologies is not really working if one ships only twice a year,” Kelker warns, by way of example. “Agility works through frequent feedback from the market, be it internal or external.” Too often organizations focus on going through the motions without an eye toward achieving business results. Agility is not only about adhering to a methodology or implementing particular technologies; it’s about business goals and value realization. “Insist on key results every six months that are aligned to business goals,” Kelker says. When a team lacks a dedicated product owner and/or Scrum master, it will struggle to implement the consistent agile practices needed to continuously improve and meet predictable delivery goals. CIOs need to ensure they have dedicated team members, and that the product owner and Scrum master thoroughly understand their roles.


Top 10 Microservices Design Principles

Microservices-based applications should have high cohesion and low coupling. The idea behind this concept is that each service should do one thing and do it well, which means that the services should be highly cohesive. These services should also not depend on each other’s internals, which means they should have low coupling. The cohesion of a module refers to how closely related its functions are. A high level of cohesion implies that the functions within a module are inextricably related and can be understood as a whole. Low cohesion suggests that the functions within a module are not closely related and cannot be understood as a whole. The higher the cohesion, the better: a module’s functions clearly belong together. Coupling measures how much knowledge one module has of another. A high level of coupling indicates that modules know a great deal about each other’s internals; there is little encapsulation between them. A low level of coupling indicates that modules are well encapsulated from one another. When components in an application are loosely coupled, the application is also easier to test.
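A toy sketch of these two properties in code, with illustrative names (nothing here is from a real framework): each class keeps one concern (high cohesion), and `BillingService` depends only on a narrow interface rather than on `InventoryService`'s internals (low coupling).

```python
from typing import Protocol


class StockChecker(Protocol):
    """Narrow interface: everything the billing module knows about inventory."""
    def in_stock(self, sku: str) -> bool: ...


class InventoryService:
    """Cohesive: only inventory concerns live here."""
    def __init__(self) -> None:
        self._stock = {"widget": 3}

    def in_stock(self, sku: str) -> bool:
        return self._stock.get(sku, 0) > 0


class BillingService:
    """Loosely coupled: accepts any StockChecker, never reaches into
    the inventory service's data structures."""
    def __init__(self, stock: StockChecker) -> None:
        self._stock = stock

    def can_invoice(self, sku: str) -> bool:
        return self._stock.in_stock(sku)


billing = BillingService(InventoryService())
print(billing.can_invoice("widget"))  # True
```

Because the dependency is only the `StockChecker` protocol, the inventory implementation can be swapped (or stubbed in tests) without touching billing, which is exactly what low coupling buys you.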



Quote for the day:

"To be a good leader, you don't have to know what you're doing; you just have to act like you know what you're doing." -- Jordan Carl Curtis

Daily Tech Digest - September 14, 2022

A vision for making open source more equitable and secure

There have been multiple attempts at providing incentive structures, typically involving sponsorship and bounty systems. Sponsorship makes it possible for consumers of open source software to donate to the projects they favor. Only projects at the top of the tower are typically well known and receive sponsorship. This biased selection leads to an imbalance: foundational bricks that hold up the tower attract few donations, while favorites receive more than they need. In contrast, tea will give package maintainers the opportunity to publish their releases to a decentralized registry powered by a Byzantine fault-tolerant blockchain to eliminate single points of failure, provide immutable releases, and allow communities to govern their regions of the open-source ecosystem, independent of external agendas. Because of the package manager’s unique position in the developer tool stack—it knows all layers of the tower—it can enable automated and precise value distribution based on actual real-world usage.


Cognitive Overload: The hidden cybersecurity threat

Cognitive overload occurs when workers are trying to take in too much information or execute too many tasks. For cybersecurity analysts, this typically falls under two areas: intrinsic load, the piecing together of complex technical information to perform incident response activities; and extraneous load, the other 97% of data in a SIEM that they must filter out, while also handling team conversations and sidebar questions. Ultimately, cognitive overload leads to poor performance levels, a lack of focus, and a lack of fulfillment. This can have particularly detrimental consequences within cybersecurity, where ransomware attacks rose 13% year-over-year – more than the past five years combined. To boot, just under half of senior cyber professionals (45%) have considered quitting the industry altogether because of stress. To accommodate the needs of this critical workforce – and fill the 771,000 cyber positions open today – companies must make easing cognitive overload a top priority. Today, it stems from two major issues. First, organizations typically lack direction in cybersecurity, tasking analysts with a broad and daunting mandate: defend our infrastructure. It’s too abstract and leaves them unsure of their roles and responsibilities. 


Medical device vulnerability could let hackers steal Wi-Fi credentials

A vulnerability found in an interaction between a Wi-Fi-enabled battery system and an infusion pump for the delivery of medication could provide bad actors with a method for stealing access to Wi-Fi networks used by healthcare organizations, according to Boston-based security firm Rapid7. The most serious issue involves Baxter International’s SIGMA Spectrum infusion pump and its associated Wi-Fi battery system, Rapid7 reported this week. The attack requires physical access to the infusion pump. The root of the problem is that the Spectrum battery units store Wi-Fi credential information on the device in non-volatile memory, which means that a bad actor could simply purchase a battery unit, connect it to the infusion pump, and quickly turn it on and off again to force the infusion pump to write Wi-Fi credentials to the battery’s memory. Rapid7 added that the vulnerability carries the additional risk that discarded or resold batteries could also be acquired in order to harvest Wi-Fi credentials from the original organization, if that organization hadn’t been careful about wiping the batteries’ memory before disposing of them.


Four Action Steps for Shoring Up OT Cybersecurity

Having proactive safeguards in place is important, but it’s also critical to have effective reactive procedures ready to respond to intrusions, especially to quickly restore the integrity of operations, applications, data, or any combination of the three. Key ICS and SCADA functions should be backed up with hot standbys featuring immediate failover capabilities should their primary counterparts be disrupted. For data protection, automated and continuous backups are preferable; at minimum, backups should be taken weekly. Ideally, the backup storage will be off-network and, even better, offsite too. The former protects backup data in case malware, such as ransomware, succeeds in circumventing defense-in-depth and network segmentation measures and locks it up. ... Like plant health, safety and environment (HSE) programs, cybersecurity should be treated as a required mainstay risk-reduction program with support from executive management, owners, and the board of directors.


The Future of the Web: The good, the bad and the very weird

The rise of big technology companies over the last two decades has made the internet more usable for most people, but has also led to the creation of a series of 'walled gardens' controlled by them, within which information is held and not easily relocated. As a result, a small number of very large companies control what you search for online, where you share information with your friends, and even where you do your shopping. Even worse, these companies have done much to develop what is effectively 'surveillance capitalism' -- taking the information we have shared with them (about what we do, where we go and who we know) to sell to advertisers and others. As smartphones have become one of the key ways we access the web, that surveillance capitalism now follows us wherever we go. And while the rise of social media (the so-called 'Web 2.0' era) promised to make it possible for individuals to produce and share their own content, it was still mostly the big tech companies that remained the gatekeepers. A platform that was once about openness seems to be dominated by big tech.


Authorization Challenges in a Multitenant System

Restricting users to the data that belongs to their tenant is the most fundamental requirement of multitenant authorization. Tenant isolation barriers are needed to prevent users from accessing sensitive information owned by another account. Such a breach would erode trust in your service and, depending on the type of exposure that occurred, could leave you liable to regulatory penalties. Tenant identification usually occurs early in the lifecycle of a request. Your service should authenticate the user, determine the tenant they belong to, and then limit subsequent interactions to data that’s associated with that tenant. ... Another complication occurs when tenants require unique combinations of roles and actions to mirror their organization’s structures. One org might be satisfied by admin and read-only roles; another may need the admin role to be split into five distinct assignments. The most effective multitenant authorization systems will flexibly accommodate customizations on a per-tenant basis. At the application level, granular permission checks will remain the same; however, the system will need to be configurable so tenants can create their own roles by combining different permissions.
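A hypothetical sketch of the per-tenant role customization described above: each tenant defines its own roles as named sets of permissions, and the application-level permission check stays the same regardless of how a tenant carved up its roles. The class and permission names are illustrative only.

```python
# The fixed, application-level permission vocabulary; granular checks
# are always expressed against these, never against role names.
PERMISSIONS = {"read", "write", "billing", "invite"}


class Tenant:
    """Per-tenant authorization state: custom roles and user assignments."""
    def __init__(self, name):
        self.name = name
        self.roles = {}        # role name -> set of permissions
        self.assignments = {}  # user id -> role name

    def define_role(self, role, perms):
        assert perms <= PERMISSIONS, "unknown permission"
        self.roles[role] = set(perms)

    def assign(self, user, role):
        self.assignments[user] = role

    def allowed(self, user, perm):
        role = self.assignments.get(user)
        return role is not None and perm in self.roles.get(role, set())


# One tenant is happy with admin/read-only; another could define five roles.
acme = Tenant("acme")
acme.define_role("admin", {"read", "write", "billing", "invite"})
acme.define_role("read-only", {"read"})
acme.assign("alice", "read-only")

print(acme.allowed("alice", "read"))   # True
print(acme.allowed("alice", "write"))  # False
print(acme.allowed("bob", "read"))     # False: no role in this tenant
```

Because `allowed` is the only check the application performs, tenants can reshape their role catalogs freely without any code change at the permission-check sites.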


Deployment Patterns in Microservices Architecture

The Multiple Service Instances per Host pattern involves provisioning one or more physical or virtual hosts, each of which runs multiple service instances. This pattern has two variants: in one, each service instance is a separate process; in the other, several service instances run within the same process. One of the most beneficial features of this pattern is its efficient use of resources and its straightforward deployment. Its low overhead also makes it possible to start services quickly. The major drawback is limited isolation: unless each service instance runs as its own process, the resource consumption of each instance becomes difficult to determine and monitor when several instances share the same process. The Service Instance per Host pattern is a deployment strategy in which only one microservice instance executes on a particular host at a time. Note that the host can be a virtual machine or a container running just one service instance.


Bursting the Microservices Architectures Bubble

The buzz surrounding microservices in recent years doesn't reflect the sudden emergence of the microservices concept at that time, however. Microservices architectures actually have a long history that stretches back decades. But they didn't really catch on and gain mainstream focus until the early-to-mid 2010s. So, why did everyone go gaga over microservices starting about ten years ago? That's a complex question, but the answer probably involves the popularization around the same time of two other key trends: DevOps and cloud computing. You don't need microservices to do DevOps or use the cloud, but microservices can come in handy in both of these contexts. For DevOps, microservices make it easier in certain important respects to achieve continuous delivery because they allow you to break complex codebases and applications into smaller units that are easier to manage and easier to deploy. And in the cloud, microservices can help to consume cloud resources more efficiently, as well as to improve the reliability of cloud apps.


New Survey Shows 6 Ways to Secure OT Systems

A fundamental principle of OT security is the need to create an air gap between ICS and OT systems and IT systems. This basic network cybersecurity design employs an industrial demilitarized zone to prevent threat actors from moving laterally across systems, but the survey finds that only about half of organizations have an IDMZ within their OT architecture, and 8% are working on one. The healthcare, public health and emergency services sectors were especially behind: nearly 40% of respondents in those sectors don't have plans to implement an IDMZ. Implementing a DMZ is a basic best practice, Ford says. "The risk is lateral movement, where a breach can move from IT to OT or vice versa, or from low-value network assets to high-value network assets," Ford says. "The more attackers can penetrate your infrastructure, the greater damage and downtime they can cause. Segmentation via a DMZ, or demilitarized zone, provides an air gap between IT and OT, and additional segmentation can further protect business-critical assets with strong access controls, firewalls and policy rules based on zero trust." 


Wearable devices: invasion of privacy or health necessity?

Dangling a carrot of free technology is a way to engage customers, but protection is vital should wearable technology be compromised. This data isn’t simply name, address and payment details, but potentially highly personal data about an individual’s wellbeing. The insurance industry will need to develop solutions that help protect the policyholder and reassure the individual that their data is secure. With GDPR, UK-GDPR and other global regulations to consider, insurers are spending considerable time and investment in ensuring data is well protected. The ubiquitous nature of wearables has helped increase engagement with insurance, and customers have been introduced to the numerous health benefits of using these devices. If you’ve already got a device tracking your wellbeing, why would you not want a doctor also doing the same? By becoming an extension to the wearable itself, wearable insurance is likely to be generally accepted by customers.



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - September 13, 2022

Data Analytics: The Ugly, But Crucial Step CEOs Can’t Ignore

Leaders only need to look around to see that the best CEOs are making data central to their business. This has become even more important as companies grapple with rising costs. Good data analytics allow companies to stay on top of their purchases and to roll costs over to their customers, a capability that is proving highly valuable these days in the manufacturing and automotive industries. Companies with low data maturity tend to keep data siloed, using different criteria across departments to collect and interpret it. This leads to missed opportunities from not integrating data to generate information at a granular level. They may know if they just had a good month but are not able to see how that breaks down on a per-item level or how it compares to other periods to give them a better understanding of “why” they had a good month and how they might be able to proactively make decisions to repeat or even further improve results. One manufacturing firm we know recently employed analytics to clean up its data and for the first time obtain a SKU-level visualization of the profitability of each item it sold.
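The per-item breakdown described above is, at its core, a simple aggregation once data is no longer siloed. A minimal sketch with made-up figures (any real pipeline would pull from integrated sales and cost systems):

```python
# Toy sales ledger; in practice these rows would come from integrated
# sales and cost-of-goods systems rather than a hard-coded list.
sales = [
    {"sku": "A-100", "revenue": 120.0, "cost": 90.0},
    {"sku": "A-100", "revenue": 110.0, "cost": 95.0},
    {"sku": "B-200", "revenue": 300.0, "cost": 320.0},
]


def profit_by_sku(rows):
    """Aggregate profit per SKU: the 'why was this a good month' view."""
    totals = {}
    for r in rows:
        totals[r["sku"]] = totals.get(r["sku"], 0.0) + r["revenue"] - r["cost"]
    return totals


print(profit_by_sku(sales))  # {'A-100': 45.0, 'B-200': -20.0}
```

Even this toy view surfaces what a single monthly total hides: one SKU is profitable while another loses money on every sale.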


Extended reality — where we are, what’s missing, and the problems ahead

What’s missing for immersion in VR/MR is full-body instrumentation so you can move and interact in the virtual world(s) as you would in the real world. Hand scanning with cameras on a headset has not been very reliable, and the common use of controllers creates a disconnect between how you want to interact with a virtual world and how you must interact with it. This is particularly problematic with MR because you use your naked hand for touching real objects and the controller for touching rendered objects, which spoils the experience. Haptics, which Meta and others are aggressively developing, are only a poor stop-gap; what’s needed is a way to seamlessly bring a person into the virtual world and allow full interaction and sensory perception as if it were the real world. AR standalone has had issues with occlusion, which are being worked on by Qualcomm and others. When corrected, rendered objects will look more solid and less like ghostly images that are partially transparent. But the use cases for this class are very well developed, making this the most attractive solution today.


Global companies say supply chain partners expose them to ransomware

Mitigation of ransomware risk should start at the organization level. “This would also help to prevent a scenario in which suppliers are contacted about breaches to pressure their partner organizations into paying up,” according to the research. In the last three years, 67% of respondents who had been attacked experienced this kind of blackmail to force payment. While ransomware mitigation starts inside the firewall, the research suggests that it must then be extended to the wider supply chain to help reduce the risk from the third-party attack surface. One of the best practices to reduce risk is to gain a comprehensive understanding of the supply chain itself, as well as corresponding data flows, so that high-risk suppliers can be identified. “They should be regularly audited where possible against industry baseline standards. And similar checks should be enforced before onboarding new suppliers,” according to the research. Some of the other practices include scanning open-source components for vulnerabilities/malware before they are used and built into CI/CD pipelines, running XDR programs to spot and resolve threats before they can make an impact, running continuous risk-based patching and vulnerability management.


Playwright: A Modern, Open Source Approach to End-To-End Testing

Contrary to other solutions, Playwright doesn’t use the WebDriver protocol. It leverages the Chrome DevTools protocol to communicate with Chromium browsers (Chrome/Edge) directly. This approach allows for more direct and quicker communication. Your tests will be more powerful and less flaky. But Playwright doesn’t stop at Chromium browsers. The team behind the project understood that cross-browser tests are essential for an end-to-end testing solution. They’re heavily invested in providing a seamless experience for Safari and Firefox, as well, and even Android WebView compatibility is in the works. Testing your sites in Chrome, Edge, Firefox and Safari is only a configuration matter. And this saves time and headaches! It’s not only about automating multiple browsers, though. If your tests are hard to write because you have to place countless “sleep” statements everywhere, your test suite will take hours to complete and become a burden. To avoid unnecessary waits, Playwright comes with “auto-waiting.” The idea is simple: Instead of figuring out when a button is clickable by yourself, Playwright performs actionability tests for you.
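The auto-waiting idea can be sketched outside of Playwright itself: instead of fixed "sleep" statements sized for the slowest run, poll an actionability check and proceed the moment it passes. This is a conceptual illustration of the technique, not Playwright's actual implementation or API:

```python
import time


def auto_wait(check, timeout=5.0, interval=0.05):
    """Poll an actionability check until it passes or the timeout expires,
    instead of sprinkling fixed sleeps through the test."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False


# Simulated button that becomes clickable shortly after "page load".
ready_at = time.monotonic() + 0.2


def clickable():
    return time.monotonic() >= ready_at


assert auto_wait(clickable)  # returns as soon as the check passes
```

The payoff is twofold: fast pages never pay for a worst-case sleep, and slow pages don't flake, because the wait adapts to when the element actually becomes actionable.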


7 ways to create a more IT-savvy C-suite

Carter Busse, CIO at intelligent automation platform provider Workato, stresses the importance of networking with management peers. Each interaction provides an opportunity to ask questions, listen, and share information and insights. “We lack a water cooler in this remote world, but setting up biweekly meetings with my peers helps me understand their priorities and gives me an opportunity to communicate key knowledge,” Busse says. “These meetings also help build the trust that’s so crucial for success as a CIO.” Knowledge communicated to management peers should align with the enterprise’s basic mission. “As CIOs, we need to share our knowledge of the business first, followed by how the technology initiatives our team is working on are aligned with the company mission,” Busse says. “It’s important to work on a shared level of understanding first to ensure that the message lands.” ... Every enterprise leader has a different relationship to technology as well as a different level of IT knowledge. Creating personalized discussions, specific to both the enterprise and the leader’s role, will help develop a more tech-savvy C-suite, which can lead to improved support and adoption of proposed IT solutions.


Consider a mobile-first approach for your next web initiative

When going mobile first, it’s important to remember that content is king. Designers should focus on surfacing exactly the content a user needs and nothing more. Extra elements tend to distract from the user’s focus on the current task, and productivity suffers when screen real estate is limited. So, while it is typical to show all the options on a desktop view, well-designed mobile applications use context to decide what to show when and just as importantly, what not to show. It doesn’t mean mobile users can’t get to all those fine-grained options, it just means those options that don’t generally support the main use case are hidden behind low-profile UI constructs like collapsible menus and accordions. ... While more common in B2C apps, in recent years many B2B organizations are also taking advantage of mobile-first strategies. Because mobile-first development prioritizes the smallest screen, it effectively shifts focus and tough conversations around core functionality left. By starting with deciding how an app will look and operate on a smartphone before moving on to larger screens and devices, developers, designers and product owners quickly get alignment on what matters to users and customers.


AI Risk Intro 1: Advanced AI Might Be Very Bad

No one knows for sure where the ML progress train is headed. It is plausible that current ML progress hits a wall and we get another “AI winter” that lasts years. However, AI has recently been breaking through barrier after barrier, and so far does not seem to be slowing down. Though we’re still at least some steps away from human-level capabilities at everything, there aren’t many tasks where there’s no proof-of-concept demonstration. Machines have been better at some intellectual tasks for a long time; just consider calculators which are already superhuman at arithmetic. However, with the computer revolution, every task where a human has been able to think of a way to break it down into unambiguous steps (and the unambiguous steps can be carried out with modern computing power) has been added to this list. More recently, more intuition- and insight-based activities have been added to that list. DeepMind’s AlphaGo beat the top-rated human player of Go (a far harder game than chess for computers) in 2016. In 2017, AlphaZero beat both AlphaGo at Go (100-0) and superhuman chess programs at chess, despite training only by playing against itself for less than 24 hours.


Making Hacking Futile – Quantum Cryptography

There are many methods for exchanging quantum mechanical keys. The transmitter sends light signals to the receiver, or entangled quantum systems are employed. The scientists employed two quantum mechanically entangled rubidium atoms in two labs 400 meters apart on the LMU campus in the current experiment. The two facilities are linked by a 700-meter-long fiber optic cable that runs under Geschwister Scholl Square in front of the main building. To create an entanglement, the scientists first stimulate each atom with a laser pulse. Following this, the atoms spontaneously return to their ground state, each releasing a photon. The spin of the atom is entangled with the polarization of its emitted photon due to the conservation of angular momentum. The two light particles travel over the fiber optic cable to a receiver station, where a combined measurement of the photons reveals atomic quantum memory entanglement. To exchange a key, Alice and Bob – as the two parties are usually dubbed by cryptographers – measure the quantum states of their respective atoms.


Digital Transformation: Connecting The Dots With Web3

Let's remove the blindfold and have a look around. We can see that the metaverse of business interactions has multiple businesses or business contexts modeled as interconnected domains (and subdomains). In place of business boundaries naturally becoming system boundaries or bounded domain contexts, we now have systems at the enterprise level. You have spaghetti data integrations primarily driven by these systems and their interfaces. Still, the source of truth is fragmented across these multiple systems—whether it's a core operation, collaboration or content management system. Thanks to the advent of cloud computing, we have some solace in transcending these boundaries through a multitenant software/platform service. It's like we have built this world in silos as concrete islands first and then started erecting bridges as we discover more ways of interaction in the context of exchanging value. In a graph, you can picture systems and their integrations like nodes and edges. The digital transformation blueprint essentially translates to a specification for building the bridge between systems (both internal and external).


Third of IT decision-makers rely on gut feel when choosing network operator

Among the top line findings was that business leaders ranked trustworthiness, professionalism and experience as the top reasons for selecting a network operator. When asked whether consistent and transparent communication or speed (in terms of delivery and operations) was more important to them when choosing a network provider, 64% said communication was by far the prime practical quality required – speed was just 36% of the vote. However, decision-makers in the US are particularly driven by emotion, with 46% attributing more than half of their decision-making processes to it. Also, perceived “quality”, in a network services sense, was a broad and somewhat intangible concept, with no single commonly accepted definition. And while, for most leaders, network quality is a given – with service-level agreements (SLAs) acting as a key safety net – the survey suggested that it does not define or capture all the qualities that matter to decision-makers. In addition to this, 84% of decision-makers thought it should always be possible to speak with a customer services person without using chatbots or automated phone lines. In the US, 90% of leaders were adamant about this.



Quote for the day:

"Nobody is more powerful as we make them out to be." -- Alice Walker

Daily Tech Digest - September 12, 2022

What the 5G Future Means for Digital Workforce Management

Mobile carriers can divide, or “slice” networks into different tracks for different devices or applications. Organizations can enable devices and workstations to have separate networks, all on the same carrier. In practice, this looks a lot like rerouting traffic. A collaborative meeting that requires a lot of bandwidth won’t mean that another team experiences delays or poor network coverage. Organizations can have more control over how they distribute coverage to minimize lost time and productivity. Ultimately, this will make remote work more sustainable. While 2020 may have been the year of transitioning to working remotely, 2021 has proven so far that remote work is here to stay. ... We’re only beginning to see the potential of 5G and AI working in tandem. Recently, IBM partnered with Samsung to leverage AI for mobile devices operating on a 5G network. Their goal was to build a platform that generated alerts for firefighters and law-enforcement officers and addressed issues before they escalated.


The role of organisational culture in data privacy and transparency

In an era of mass personalisation and technological innovation, organisations increasingly need to make consideration of the way they use consumer data a part of their organisational culture. Since the GDPR’s inception back in May 2018, there have been some encouraging findings indicating that consumers are increasingly willing to share their data in exchange for personalised services and improved experiences. In addition, marketers are more confident about their reputation in the eyes of consumers. However, there is still a long way to go to improve consumer trust in marketing and highlight how data can be used as a force for good. Recent Adobe research reveals that over 75 per cent of UK consumers are concerned about how companies use their data. What’s more, an ICO report found that when consumers were asked if they trust brands with their data, little over a quarter (28 per cent) agreed. This proportion must be much larger if businesses are to truly thrive in the digital age. With technologies such as machine learning having a transformative impact on business, there is little doubt that, as they continue to evolve, the data sets they rely on will be key to a competitive advantage.


IoT software trends in 2023

IoT security has become crucial for organisations looking to successfully implement IoT solutions. This is because digital transformation acceleration has led to an influx of devices coming online. With the exponential growth in the number of devices now connected to the internet, the attack surface has also gotten significantly larger. Opportunistic cybercriminals now have more entry points – from insecure connections, and legacy devices to weak digital links – to take control of these IoT devices to spread malware or gain direct access into the network to obtain critical data. For IoT devices, the risks are doubly high for two reasons. Firstly, IoT devices typically do not come with in-built security functions, which makes them an easy target for hackers. Secondly, IoT devices, especially those that are small or light, can be easily misplaced or stolen. Unauthorised users who have gained physical possession of the devices can easily access your network. This is also why cybersecurity is now a huge area of focus for IoT devices and software. On the other hand, failure to secure IoT ecosystems could lead to eroding trust in their potential across the organisation, as well as wasted investment costs.


Microservices to Async Processing Migration at Scale

There are two potential sources of data loss. First: if the Kafka cluster itself were to be unavailable, of course, you might lose data. One simple way to address that would be to add an additional standby cluster. If the primary cluster were to be unavailable due to unforeseen reasons, then the publisher (in this case, the Playback API) could then publish into this standby cluster. The consumer request processor can connect to both Kafka clusters and therefore not miss any data. Obviously, the tradeoff here is additional cost. For a certain kind of data, this makes sense. Does all data require this? Fortunately not. We have two categories of data for playback. Critical data gets this treatment, which justifies the additional cost of a standby cluster. The other, less critical data gets a normal single Kafka cluster. Since Kafka itself employs multiple strategies to improve availability, this is good enough. Another source of data loss is at publish time. Kafka has multiple partitions to increase scalability. Each partition is served by an ensemble of servers called brokers. One of them is elected as the leader. 
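The primary/standby failover described above can be sketched with stand-in objects; this is illustrative logic only, not the real Kafka client API or Netflix's implementation:

```python
class Cluster:
    """Stand-in for a Kafka producer client (illustrative, not the real API)."""
    def __init__(self, name, up=True):
        self.name, self.up, self.log = name, up, []

    def publish(self, event):
        if not self.up:
            raise ConnectionError(f"{self.name} unavailable")
        self.log.append(event)


def publish_critical(event, primary, standby):
    """Critical data: fall back to the standby cluster if the primary
    publish fails, so the event is not lost. Consumers read from both."""
    try:
        primary.publish(event)
        return "primary"
    except ConnectionError:
        standby.publish(event)
        return "standby"


primary, standby = Cluster("primary", up=False), Cluster("standby")
print(publish_critical({"title": "show-42"}, primary, standby))  # standby
```

Less critical data would skip the `standby` path entirely and accept the (already small) availability risk of a single replicated cluster, which is the cost tradeoff the text describes.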


The road ahead for workplace transformation with IoT, 5G, and Cloud

Connectivity is the bedrock of IoT solutions, and flexible infrastructure such as 5G can support expanding requirements. 5G also helps reimagine existing use cases and explore newer, transformative use cases that could not be supported by current connectivity technologies. Forecasts suggest as many as 75 billion connected IoT devices by 2025, nearly three times the number in 2019. Of course, like all other technologies, networks will evolve to be self-optimized, with automation, analytics, and artificial intelligence (AI) working across a multivendor cloud ecosystem. Telecom providers will therefore need to focus their network engineering efforts on extreme agility at scale, acceleration through execution excellence, and strong thought leadership and innovation. Essentially, IoT, 5G, and cloud technologies will play a crucial role in the digital transformation of organizations across industry sectors, moving enterprises towards Industry 4.0, a term popularized by Klaus Schwab, founder of the World Economic Forum, for the fourth industrial revolution riding on increasing interconnectivity and smart automation.


IT services firms face the heat from GCCs in war for talent

GCCs have emerged as a serious influencer of tech talent supply as they control over a quarter of the total tech workforce in India, he said. “Deep-pocketed GCCs have the advantage of buying talent at a higher price tag as they are comparatively lower volume talent consumers. GCCs are hence known to trigger wage wars against IT service players and other cohorts of tech, especially on hot and niche skill sets. GCCs have hence not just constricted talent supply funnels but also made it pricier for the IT services sector," said Karanth. Wage hikes apart, GCCs also offer a huge brand pull by allowing fresh hires to directly engage with top global brands. Such talent earlier used to engage with some of these brands as employees of IT services firms on project deployments. According to data shared by Xpheno, 23% of talent from the IT services sector has had one or more career movements over a 12-month period through July. With the tech sector recording high attrition rates, ranging from 8% to 37%, the talent movement rate of 23% is in line with the average attrition rates seen in the industry during the period.


Reining in the Risks of Sustainable Sourcing

Sustainable sourcing starts with a basic requirement: “It’s essential to know who you’re buying from and where you’re buying,” O’Connell says. These decisions impact the environmental footprint -- including exposure to climate change, energy efficiency of the grid, production requirements, and circularity considerations. They also provide some direction about specific vendor or supplier risks. Vetting existing and new suppliers is essential. There’s a need to understand a partner’s sustainability goals and whether the firm is a good match. Their practices -- and their risks -- become part of a buyer’s practices and risks. “The engagement strategy should be tailored to drive collaboration and provide support to help both companies achieve their sustainability goals,” O’Connell explains. Ensuring that suppliers can produce enough plant-based materials, alternative fuels or low-carbon concrete is critical to mapping out a carbon reduction plan. Scarcity is a common problem with alternative materials and products. 


Quiet quitting: 9 IT leaders weigh in

“In some respects, IT leaders should be more concerned about the 'quiet quitters' in their workforce than those who actually leave the organization. Notwithstanding the inherent challenges of losing an employee, IT leaders can at least take proactive steps to replace the role with the appropriate talent and skill sets. The situation is not as clear when it comes to quiet quitters. IT leaders must approach quiet quitters with caution and take steps to determine the underlying root cause for this behavior. If 'disengagement' from work is the trigger, IT leaders must take remedial measures not to lose the employee 'emotionally' even though they are physically there. Physically absent employees are easier to replace than emotionally absent workers. ... “Quiet quitting is synonymous with healthy boundaries. So is this concept a good or bad thing? Should HR leaders be concerned? It boils down to the single most valuable lesson the pandemic already taught us: managing employees is not what it used to be. Companies have to adapt. Now more than ever, we have to enable employees to succeed in a more autonomous and self-guided way, and part of that is integrating work into employees’ lives, not life into their work.


US Sanctions Iranian Spooks for Albania Cyberattack

The sanctions are a demonstration that the United States is willing to use its sway over the global financial system to dissuade other governments from cyberattacks against allies, said Dave Stetson, a former attorney-adviser in the Office of the Chief Counsel at the Treasury Department's Office of Foreign Assets Control. Today's sanctions demonstrate "that the U.S. views those cyberattacks against third countries as affecting U.S. national security and foreign policy" and that the White House is prepared to "impose sanctions on the persons who perpetrate those attacks," he told Information Security Media Group. Technically, the Specially Designated Nationals list of sanctioned entities only affects American institutions and individuals, but a new addition is actually a global event. Transactions between foreign entities can easily involve U.S. financial institutions. The federal government hasn't been shy about going after banks that do business with sanctioned individuals even if there's just a momentary nexus to an American financial institution, said Stetson, now a partner with law firm Steptoe & Johnson. Foreign banks also have reputational and customer selection concerns, he added.


The role that data will play in our future

Raising trust levels cannot be addressed in isolation: it requires high-level governance principles and guidelines. Governance frameworks (including data governance ones) must be in place if societies are to anticipate and shape the impact of emerging technologies. Their absence would create scenarios where the digital revolution, like all revolutions, eventually devours its own children. The realisation has emerged that if we are not able to leverage technology to bring out the best in humans, we are potentially headed for scenarios in which society is fractured and some of our core organisational principles, such as democracy, can be perverted. The COVID crisis turned the digitisation priority into a digitisation imperative. In parallel, new tensions have appeared that could lead, for example, to a splintering of the Internet (splinternet). Some would even argue that the metaverse we see emerging in front of us is already splintered from the start, and that its rapid Wild West-style growth will lead to intractable issues if some sort of guiding principles are not adopted soon.



Quote for the day:

"People buy into the leader before they buy into the vision." -- John C. Maxwell