Daily Tech Digest - May 10, 2022

Tackling tech anxiety within the workforce

The average employee spends over two hours each day on work admin, manual paperwork, and unnecessary meetings. As a result, 81% of workers are unable to dedicate more than three hours of their day to creative, strategic tasks — the very work most ill-suited to machines. Fortunately, this is where digital collaboration comes in. When AI is used to automate certain processes, employees are freer to work on what they love, which often also happens to be what they do best. The time recovered then offers more opportunities to learn, create, and innovate on the job. Take Google’s ‘20% time’ rule, for instance. The policy involves Google employees spending a fifth of their week away from their usual, everyday responsibilities. Instead, they use the time to explore, work, and collaborate on exciting ideas that might not pay off immediately, or even at all, but could eventually reveal big business opportunities. It’s a win-win model for almost every business. At worst, colleagues enjoy the time to strengthen team bonds, improve problem-solving skills, and boost their morale. And at best, they uncover incredible ideas that can change the course of the company.


NFTs Emerge as the Next Enterprise Attack Vector

"The most common attacks try to trick cryptocurrency enthusiasts into handing over their wallet’s recovery phrase," he says. Users who fall for the scam often stand to lose access to their funds permanently, he says. Bogus airdrops, which are fake promotional giveaways, are also common; they ask for recovery phrases or have the victim connect their wallets to malicious airdrop sites, he adds, noting that many fake airdrop sites are imitations of real NFT projects. And with so many small, unverified projects around, it’s often hard to determine authenticity, he notes. Oded Vanunu, head of product vulnerability at Check Point Software, says what his company has observed by way of NFT-centric attacks is activity focused on exploiting weaknesses in NFT marketplaces and applications. "We need to understand that all NFT or crypto markets are using Web3 protocols," Vanunu says, referring to the emerging idea of a new Internet based on blockchain technology. Attackers are trying to figure out new ways to exploit vulnerabilities in applications connected to decentralized networks such as blockchain, he notes.


The OT security skills gap

Though the responsibility for OT security is often combined with the OT infrastructure design role, in the OT world this is, in my opinion, less logical, because it is the automation design engineer who has the wider overview of overall business functions in the system. If OT were like IT, that is, primarily data manipulation, it would make sense to put the lead with OT infrastructure design. But because OT is not only data manipulation but also initiates control actions that must operate within a restricted operating window, it makes sense to give automation design this coordinating role. Automation design oversees all three skill elements and has more detailed knowledge of the production process than the OT infrastructure design role. It is very comparable to cyber security in a bank, where the lead role is linked to the overall business process and infrastructure security plays a more supportive role. Finally, there is the process design role: what are its cyber security responsibilities? First of all, the process design role understands all the process deviations that can lead to trouble; they know what that trouble is, how to handle it, and they have set criteria for limiting the risk that it occurs.


Ransomware-as-a-service: Understanding the cybercrime gig economy and how to protect yourself

The cybercriminal economy—a connected ecosystem of many players with different techniques, goals, and skillsets—is evolving. The industrialization of attacks has progressed from attackers using off-the-shelf tools, such as Cobalt Strike, to attackers being able to purchase access to networks and the payloads they deploy to them. This means that the impact of a successful ransomware and extortion attack remains the same regardless of the attacker’s skills. RaaS is an arrangement between an operator and an affiliate. The RaaS operator develops and maintains the tools to power the ransomware operations, including the builders that produce the ransomware payloads and payment portals for communicating with victims. The RaaS program may also include a leak site to share snippets of data exfiltrated from victims, allowing attackers to show that the exfiltration is real and try to extort payment. Many RaaS programs further incorporate a suite of extortion support offerings, including leak site hosting and integration into ransom notes, as well as decryption negotiation, payment pressure, and cryptocurrency transaction services.


U.S. White House releases ambitious agenda to mitigate the risks of quantum computing

The first directive, the executive order, seeks to advance quantum information science (QIS) by placing the National Quantum Initiative Advisory Committee, the federal government’s main independent expert advisory body for quantum information science and technology, under the authority of the White House. The National Quantum Initiative, established by a law known as the NQI Act, encompasses activities by executive departments and agencies (agencies) with membership on either the National Science and Technology Council (NSTC) Subcommittee on Quantum Information Science (SCQIS) or the NSTC Subcommittee on Economic and Security Implications of Quantum Science (ESIX). ... The national security memorandum (NSM) plans to tackle the risks posed to encryption by quantum computing. It establishes a national policy to promote U.S. leadership in quantum computing and initiates collaboration among the federal government, industry, and academia as the nation begins migrating to new quantum-resistant cryptographic standards developed by the National Institute of Standards and Technology (NIST).


Industry pushes back against India's data security breach reporting requirements

India's Internet Freedom Foundation has offered extensive criticism of the regulations, arguing that they were formulated and announced without consultation, lack a data breach reporting mechanism that would benefit end-users, and include data localization requirements that could prevent some cross-border data flows. The foundation also points out that the privacy implications of the rules – especially the five-year retention of personal information – are significant at a time when India's Draft Data Protection Bill has proven so controversial it has failed to reach a vote in Parliament, and debate about digital privacy in India remains fierce. Indian outlet Medianama has quoted infosec researcher Anand Venkatanarayanan, who claimed one way to report security incidents to CERT-In involves a non-interactive PDF that has to be printed out and filled in by hand. Venkatanarayanan also pointed out that the rules' requirement to report incidents as trivial as port scanning has not been explained – is it one PDF per IP address scanned, or can one report cover many IP addresses?


When—and how—to prepare for post-quantum cryptography

Consider data shelf life. Some data produced today—such as classified government data, personal health information, or trade secrets—will still be valuable when the first error-corrected quantum computers are expected to become available. For instance, a long-term life insurance contract may already be sensitive to future quantum threats because it could still be active when quantum computers become commercially available. Any long-term data transferred now on public channels will be at risk of interception and future decryption. Because regulations on PQC do not yet exist, the possibility of data transferred today being decrypted in the future does not yet pose a compliance risk. For the moment, far more significant are the future consequences for organizations, for their customers and suppliers, and for those relationships. However, regulatory considerations will also become relevant as the field develops, which could speed up the need for some organizations to act. Just as with data, some critical physical systems developed today ... will still be in use when the first fully error-corrected quantum computer is expected to come online.
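This shelf-life argument is sometimes formalized as Mosca's inequality: if the years data must stay confidential plus the years a PQC migration will take exceed the years until a cryptographically relevant quantum computer exists, traffic intercepted today is already exposed. A minimal Python sketch; every year figure below is a placeholder assumption, not a forecast.

```python
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_to_quantum: float) -> bool:
    """Mosca's inequality: data is at risk if x + y > z, where
    x = how long the data must remain confidential,
    y = how long migrating to post-quantum crypto will take,
    z = estimated years until a cryptographically relevant quantum computer."""
    return shelf_life_years + migration_years > years_to_quantum

# Placeholder estimates only: a 25-year insurance contract, a 5-year
# migration effort, and a quantum computer assumed ~15 years away.
print(harvest_now_decrypt_later_risk(25, 5, 15))  # True: start migrating now
```
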
If we compare railways with, for example, the banking sector, we see we have some catching up to do, but given that we are used to dealing with risks, I am confident that this sector is fully able to develop the necessary mechanisms to stay resilient to these new emerging threats. Of course, we can fall victim to some kind of attack someday, just like any other organization. It is up to us to be prepared and stay resilient; I am confident we can do that. ... Actually, any technique, tactic, or procedure (TTP) that can be used in other organizations as well. What we will see, now that our sector is speeding up the digitization process, is that the attack surface is broadening and becoming more complex. Trains will become Teslas on rails, having many connections with other digital services such as the European Rail Traffic Management System (ERTMS) and driving via Automatic Train Operation (ATO). The obvious consequence is that we need to be able to withstand those TTPs and plan for mitigation in our digital roadmaps. In the ideal world, we develop our services cyber-safe by design and default. There’s work to do there!


How data can improve your website’s accessibility

With an understanding of how data can inform accessibility, it’s time to apply that data towards accessibility improvements. This entails framing your tracked data in the context of the Web Content Accessibility Guidelines (WCAG), which provide the latest standards for ensuring web accessibility. ... WCAG 2.1 is built on four accessibility principles: perceivability, operability, understandability, and robustness, plus conformance requirements. Your KPIs for accessibility should be tied to these features. For example, measure conformance through the number of criteria violations that occur through site testing. This and similar metrics will help you identify areas of improvement. ... Your approach to gathering accessibility data should not be limited to one tool or testing procedure. Instead, diversify your data to ensure quality. Both quantitative and qualitative metrics factor in, including user feedback, numbers of flagged issues, and insights from all kinds of tests and validation procedures. ... The gamut of usability considerations is broader than most testers can accommodate in one go.
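To make the violation-count KPI concrete, here is a small sketch that rolls flagged issues up into per-principle counts; the issue records are invented for illustration, and real scanners such as axe-core emit far richer reports.

```python
from collections import Counter

# Hypothetical flagged issues from automated tests and user feedback;
# real tools map each violation to a WCAG success criterion.
issues = [
    {"criterion": "1.1.1", "principle": "perceivable"},
    {"criterion": "2.1.1", "principle": "operable"},
    {"criterion": "1.4.3", "principle": "perceivable"},
    {"criterion": "4.1.2", "principle": "robust"},
]

violations_by_principle = Counter(issue["principle"] for issue in issues)
for principle, count in violations_by_principle.most_common():
    print(f"{principle}: {count} violation(s)")
# A simple KPI: total criteria violations per test run, tracked over time.
```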


Low Code: Satisfying Meal or Junk Food?

“If low code is treated as strictly an IT tool and excludes the line of business -- just like manual coding -- you seriously run the risk of just creating new technical debt, but with pictures this time,” says Rachel Brennan, vice president of Product Marketing at Bizagi, a low-code process automation provider. However, when no-code and low-code platforms are used as much by citizen developers as by software developers, whether the hunger for more development is satisfied depends on “how” they are used rather than by whom. But first, it's important to note the differences between low-code platforms for developers and those for citizen developers. Low code for the masses usually means visual tools and simple frameworks that mask the complex coded operations that lie beneath. Typically, these tools can only realistically be used for fairly simple applications. “Low-code tools for developers offer tooling, frameworks, and drag-and-drop options but ALSO include the option to code when the developer wants to customize the application -- for example, to develop APIs, or to integrate the application with other systems, or to customize front end interfaces,” explains Miguel Valdes Faura.



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - May 09, 2022

Does low code make applications overly complex?

To be clear, the inevitable outcome of low code is not necessarily complexity. Just like traditional application development, complexity can and often does make its way into the lifecycle of the product code base. While not inevitable, it is common. There are many steps you can take to reduce complexity in apps regardless of how they are built, which improves performance, scalability, availability, and speed of innovation. Yes, a low code application, like all applications, can become complex, and requires the use of simplification techniques to reduce complexity. But these issues are not tied to the use of low code. They are just as significant in regular product development processes. What low code does increase is the amount of code in your application that was not written directly by your development team. There is more code that was auto-generated by the low code platform, or included in libraries required for your application to function, but was not the product of your developers. Thus there is often more “unknown” code in your application when you use low code techniques. But unknown is not the same thing as complexity.


Ultra-fast Microservices: When Microstream Meets Wildfly

Microservices present several challenges to software engineers, especially as a first step into distributed systems. But that does not mean we're alone. Indeed, there are several tools to make our life easier in the Java world, especially MicroProfile. MicroProfile's goal is to optimize enterprise Java for a microservices architecture. It is based on the Java EE/Jakarta EE standard plus APIs specifically for microservices, such as a REST Client, Configuration, Open API, etc. Wildfly is a powerful, modular, and lightweight application server that helps you build amazing applications. ... Unfortunately, we don't have enough articles that talk about it. We should have a model, even with schemaless databases, when information about the business is more uncertain. Still, the persistence layer has more issues, mainly because it is harder to change. One of the secrets to making a scalable application is statelessness, but we cannot afford that in the persistence layer. Primarily, the database aims to keep the information and its state.


CPaaS – a technology for the future

What has made CPaaS the go-to method for customer engagement is the ubiquity of cloud technology and how it has transformed the way businesses operate. “Companies had to come up with different ways to interact with customers,” says IDC research VP Courtney Munroe, who points out that in the last few years there has been a steady move to cloud and, in particular, a confluence of mobility and cloud. “More people use smartphones and companies realised that they could develop apps for them,” he says. Steve Forcum, chief evangelist at Avaya, is also aware of the importance of cloud for enterprises looking to engage with customers. “Some customers may keep elements of their communications stack in their datacentres, but more are then infusing cloud-based capabilities,” he says. “We’ve moved to help customers across this spectrum by bringing cloud-based benefits to their datacentres.” But the technology on its own is secondary to companies’ need to be more responsive to customers. The underlying drive towards CPaaS is the need to offer a more flexible way to interact with customers.


How Should you Protect your Machine Learning Models and IP?

The most concerning threat is frequently “Will releasing this make it easy for my main competitor to copy this new feature and hurt our differentiation in the market?” If you haven’t spent time personally engineering ML features, you might think that releasing a model file, for example as part of a phone app, would make this easy, especially if it’s in a common format like a TensorFlow Lite flatbuffer. In practice, I recommend thinking about these model files like the binary executables that contain your application code. By releasing it you are making it possible to inspect the final result of your product engineering process, but trying to do anything useful with it is usually like trying to turn a hamburger back into a cow. Just as with executables, you can disassemble them to get the overall structure by loading them into a tool like Netron. You may be able to learn something about the model architecture, but just like disassembling machine code, it won’t actually give you a lot of help reproducing the results. Knowing the model architecture is mildly useful, but most architectures are well known in the field anyway, and only differ from each other incrementally.


The new cybersecurity mandate

Bearing security in mind at all times rings true, as it inspires us to think about the security implications of the changes we are making. On the other hand, it bears some resemblance to the old premature performance optimization debate. We’re not going to wade into that here (or the test-driven development debate, or any other similar one). I just want to point out that software development is laden with complexity and obstacles to action. Security considerations must be harmonized into the equation. The next bullet point in the fact sheet makes the following statement: “Develop software only on a system that is highly secure and accessible only to those actually working on a particular project.” This one makes the reader pause for a moment. It seems to have arrived at the conclusion that in order to build secure systems, we should build secure systems. If we are patient, the next sentence helps deliver the full meaning: “This will make it much harder for an intruder to jump from system to system and compromise a product or steal your intellectual property.” What the framers of this fact sheet are driving at here is actually something like a rephrasing of zero trust architecture.


US Passes Law Requiring Better Cybercrime Data Collection

The impact of this legislation depends entirely on the usefulness of the taxonomy itself, says Jennifer Fernick, senior vice president and global head of research at security consultancy NCC Group. "The authors of that taxonomy need to meaningfully answer what data points about cybercrime will enable meaningful intervention for the future prevention of these crimes," Fernick, who is also a National Security Institute visiting technologist fellow at George Mason University, tells Information Security Media Group. "It is important, for example, to distinguish, at a minimum, computer-related crimes that attack human judgment or exploit edge cases in business processes from crime that is enabled through specific hardware or software flaws that can be exploited by criminals attacking an organization's IT infrastructure. In the latter case, it would be valuable in particular to identify the specific software or hardware components, or even specific security vulnerabilities or CVEs, which served as the substrate for the attack, to help inform organizations about where they would most benefit from strengthening their cybersecurity defenses," Fernick says.


How smart data capture is innovating the air travel experience

Using smart data capture on mobile devices has multiple benefits. Unlike fixed scanners, it enables customer service agents to perform multiple tasks anywhere in the airport. Airlines can automate processes such as check-in, security queues, lounge access, and luggage management, providing a modern, sleek impression from the first moment a passenger enters the terminal. Compared with the old approach of using rugged devices at fixed stations, smart data capture on mobile devices delivers significant customer benefits and staff efficiencies. Airport queues have been big news recently, but with staff equipped with smart mobile devices, waiting times can be cut as they can patrol queues and scan IDs, passports and QR codes to speed passengers through check-in and deliver a more personalised experience — accessing details about a passenger’s seat preferences or dietary requirements, for example. Customer service agents using smart mobile devices can easily manage oversized luggage presented at the gate and quickly check it into the hold.


Are Blockchain and Decentralized Cloud Making Much Headway?

Basically, the value of decentralized cloud in its current form boils down to the circumstances and needs of the users. “If you’re setting up a mining node and need some cloud power, why would you want to pay AWS?” Litan asks. A decentralized cloud might be cheaper to run in such cases, she says, which appeals to miners who want cheap computing in order to make money on the margins. At the moment, when many developers write applications, they look to the most readily available cloud service, Litan says, and then wind up deploying on the main blockchain where there is no control over where Ethereum or Bitcoin run. “It’s like saying, ‘Where’s the internet running?’” There is some possibility for blockchain and decentralized cloud to gain more momentum down the road, but for now their impact on the entirety of cloud computing remains rather niche. “It may become more important as people start writing compute-intensive workloads and they want to keep the cost down,” Litan says. Decentralized cloud computing may also be useful for organizations running non-blockchain applications, she says. 


IT hiring: Assumptions and truths about the current talent shortage

It can be difficult to drive growth when teams are stretched and global tensions are high, as they have been for the better part of two years. New process adoption can meet resistance from employees who are already overwhelmed. If and when this happens, a stalemate often follows, and team leaders opt to wait it out, deferring change to another team or another time. ... The pandemic challenged us all to rethink the way we work. Investments in software took the place of physical office space, and teams were pushed to automate repeatable tasks to maintain a pre-pandemic level of efficiency. With the implementation of artificial intelligence and machine learning, workflow improvements can be expedited, lessening the need for as many employees. Technologies like low-code and no-code are easing the burden felt by developers by enabling employees outside of IT to build systems unique to their needs without the slowdown created by a backlog of IT tickets. In turn, this frees the bandwidth for developers to turn toward other pressing concerns like security.


Is it time to fire yourself?

This idea was brought to life when I interviewed Bracken Darrell, the CEO of Logitech International, a computer peripherals manufacturer headquartered in Switzerland and the US. In that conversation, he shared with me the story of how, about five years into his tenure at the company, he asked himself one Sunday night, “Am I the right person for the next five years?” On paper, he certainly was, he told me, given that all his changes at the company had lifted the stock about 500%. “On the other hand, I had been involved in every single personnel and strategic decision,” he said. “My disadvantage was that I knew too much, and that I was too embedded in everything we were doing. I just thought to myself that I might be done.” So he decided that night that he was going to fire himself, but he would sleep on the decision. The punchline is that he didn’t fire himself, but he did wake up the next morning with a sense of clarity about what he needed to do: “I have to rehire myself but have no sacred cows. It was super exciting and fun, and I started changing things that I had put in place. Fortunately, I didn’t have to change things radically, but I felt new again.”



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - May 08, 2022

Your mechanical keyboard isn't just annoying, it's also a security risk

If this has set you on edge then I have both good and bad news for you. The good news is that while this is fairly creepy, it's unlikely that hackers will be able to break into your private space and place a microphone in close enough proximity to your keyboard without you noticing. The bad news is that there are plenty of other ways that your keyboard could be giving away your private information. Keystroke-capturing dongles exist that can be plugged into a keyboard’s USB cable, and wireless keyboards can be exploited using hardware such as KeySweeper, a device that can record keystrokes from keyboards using the 2.4GHz frequency when placed in the same room. There are even complex systems that use lasers to detect vibrations, or fluctuations in power lines, to record what's being written on a nearby keyboard. Still, if you're a fan of mechanical keyboards then don't let any of this deter you, especially if you use one at home rather than in a public office environment. It's highly unlikely that you need to take extreme measures in your own home, and just about everything comes with a security risk these days.


Relational knowledge graphs will transform business

"There have been many generations of algorithms built that have all been created around the idea of a binary one," said Muglia. "They have two tables with the key to join the two together, and then you get a result set, and the query optimizer takes and optimizes the order of those joins — binary join, binary join, binary join!" The recursive problems such as Fred Jones's permissions, he said, "cannot be efficiently solved with those algorithms, period." The right structure for business relationships, as distinct from data relationships, said Muglia, is a knowledge graph. "What is a knowledge graph?" asked Muglia, rhetorically. He offered his own definition for what can be a sometimes mysterious concept. "A knowledge graph is a database that models business concepts, the relationships between them, and the associated business rules and constraints." Muglia, now a board member for startup Relational AI, told the audience that the future of business applications will be knowledge graphs built on top of data analytics, but with the twist that they will use the relational calculus going all the way back to relational database pioneer E.F. Codd.


We Need to Talk about the Software Engineer Grind Culture

SWE culture can be very toxic. Generally, I found that people who get rewarded within software engineering are those who sacrifice their personal time for their project/job. We reward people who code an entire project in 24 hours (I mean, just think about the popularity of hackathons). I remember watching a TikTok from a tech creator who said that US software engineers are paid so much not because of what they do during work hours, but because of all of the extra work they do outside of it. Ask yourself: are you paid enough to sacrifice your life outside of work? So many of us are conditioned to this rat race. I realized that this grind has caused me to lose out on any hobbies outside of coding. There are so many software engineers who are also tech creators on the side. Whether they run a Twitch channel dedicated to coding, make YouTube videos about coding, or create tech content on TikTok, it usually has something to do with this specialization in software engineering. The reason these channels are so successful is that we, as software engineers, have bought into this narrative.


Managing Tech Debt in a Microservice Architecture

This company has a lot of dedicated and smart engineers, which most probably explains how they were able to come up with what they call the technology capability plan. I find the TCP to be a truly innovative community approach to managing tech debt. I've not seen anything like it anywhere else. That's why I'm excited about it and want to share what we have learned with you. Here is the stated purpose of the TCP. It is used by and for engineering to signal intent to both engineering and product, by collecting, organizing, and communicating the ever-changing requirements in the technology landscape for the purposes of architecting for longevity and adaptivity. In the next four slides of this presentation, I will show you how to foster the engineering communities that create the TCP. You will learn how to motivate those communities to craft domain specific plans for paying down tech debt. We will cover the specific format and purpose of these plans. We will then focus on how to calculate the risk for each area of tech debt, and use that for setting plan priorities. 
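The talk does not spell out the risk formula itself, so the convention below is a placeholder assumption: score each debt area as likelihood times impact and sort descending to set plan priorities.

```python
# Hypothetical debt areas scored on 1-5 scales; both the scales and the
# items are illustrative assumptions, not the TCP's actual method.
debt_areas = [
    {"area": "legacy auth service", "likelihood": 4, "impact": 5},
    {"area": "unpinned dependencies", "likelihood": 5, "impact": 3},
    {"area": "manual deploy scripts", "likelihood": 3, "impact": 4},
]

for item in sorted(debt_areas,
                   key=lambda d: d["likelihood"] * d["impact"],
                   reverse=True):
    print(f'{item["area"]}: risk score {item["likelihood"] * item["impact"]}')
```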


Shedding Light On Toil: Ways Engineers Can Reduce Toil

More proactive monitoring is another way to reduce toil, according to Englund and Davis. “Responding to a crash loop is responding too late,” added Davis. Instead, he advocated that SREs look toward leading indicators that suggest the potential for failure so that teams can make adjustments well before anything drastic occurs. If SLIs like error rate and latency are getting bad, you must take reactive measures to fix them, causing more toil. Instead, proactive monitoring is best to see the cresting wave before the flood. Leading indicators could arise from following things like data queue operations connected to servers or the saturation of a particular resource. “If you can figure out when you’re about to fail, you can be prepared to adapt,” said Davis. One major caveat of standardization is that you’re inevitably going to encounter edge cases that require flexibility. And when an outage or issue does arise, the remediation process is often very unique from case to case. As a result, not all investment into standardization pays out. Alternatively, teams that know how to improvise together are proven to be better equipped for unforeseen incidents.
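One simple way to act on a leading indicator is to project its growth forward and alert on time-to-saturation rather than on the failure itself; the thresholds below are invented for illustration.

```python
def hours_until_saturation(current_pct: float,
                           growth_pct_per_hour: float,
                           limit_pct: float = 100.0) -> float:
    """Linear projection for a resource (queue depth, disk, memory):
    alert on the trend, not on the crash loop that follows it."""
    if growth_pct_per_hour <= 0:
        return float("inf")
    return (limit_pct - current_pct) / growth_pct_per_hour

# A queue at 62% capacity growing 4% per hour leaves ~9.5 hours of
# headroom -- time to adapt before any SLI starts to burn.
eta = hours_until_saturation(62.0, 4.0)
if eta < 24:
    print(f"warning: projected saturation in {eta:.1f}h")
```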


Are your SLOs realistic? How to analyze your risks like an SRE

You can reduce the impact on your users by reducing the percentage of infrastructure or users affected or the requests (e.g., throttling part of the requests vs. all of them). In order to reduce the blast radius of outages, avoid global changes and adopt advanced deployments strategies that allow you to gradually deploy changes. Consider progressive and canary rollouts over the course of hours, days, or weeks, which allow you to reduce the risk and to identify an issue before all your users are affected. Further, having robust Continuous Integration and Continuous Delivery (CI/CD) pipelines allows you to deploy and roll back with confidence and reduce customer impact. Creating an integrated process of code review and testing will help you find the issues early on before users are affected. Improving the time to detect means that you catch outages faster. As a reminder, having an estimated TTD expresses how long until a human being is informed of the problem.
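A common way to compare such risks numerically is expected "bad minutes" per year: incident frequency times (time to detect plus time to repair) times the fraction of users affected, checked against the downtime an SLO allows. A sketch with illustrative numbers:

```python
def expected_bad_minutes(incidents_per_year: float, ttd_min: float,
                         ttr_min: float, users_affected_frac: float) -> float:
    """Annual user-weighted downtime contributed by one risk."""
    return incidents_per_year * (ttd_min + ttr_min) * users_affected_frac

# Illustrative comparison: a bad global rollout vs. a 5% canary.
global_rollout = expected_bad_minutes(4, ttd_min=15, ttr_min=60,
                                      users_affected_frac=1.0)
canary_rollout = expected_bad_minutes(4, ttd_min=15, ttr_min=60,
                                      users_affected_frac=0.05)
slo_budget = 0.001 * 365 * 24 * 60  # 99.9% SLO allows ~525 bad minutes/year
print(global_rollout, canary_rollout, slo_budget)  # 300.0 15.0 525.6
```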


5 Ways to Drive Mature SRE Practices

Project failure — and the way it’s regarded within the organization — is often as important as success. To create maximum value, SREs must be free to experiment and work on strategic projects that push the boundaries, understanding they will fail as often as they succeed. However, according to the “State of SRE Report,” only a quarter of organizations accept the “fail fast, fail often” mantra. To mature their practice, enterprises must free SREs from the traditional cost constraints placed upon IT and encourage them to challenge accepted norms. They should be setting new benchmarks for innovative design and engineering practices, not be bogged down in the minutiae of development cycles. Running hackathons and bonus schemes focused on reliability improvements is a great way to uplevel SREs and encourage an organizational culture of learning and experimentation, where failure is valued as much as success. Measurement is critical to developing any IT program, and SRE is no exception. To truly understand where performance gaps are and optimize critical user journeys, SREs need to go beyond performance monitoring data.


The Future of Data Management: It’s Already Here

Data fabric can automatically detect data abnormalities and take appropriate steps to correct them, reducing losses and improving regulatory compliance. A data fabric enables organizations to define governance norms and controls, improve risk management, and strengthen monitoring, capabilities of increasing importance given that legal standards for data governance and risk management have become more demanding and compliance and governance vital. It also enhances cost savings through the avoidance of potential regulatory penalties. A data fabric represents a fundamentally different way of connecting data. Those who have adopted one now understand that they can do many things differently, providing an excellent route for enterprises to reconsider a host of issues. Because data fabrics span the entire range of data work, they address the needs of all constituents: developers, business analysts, data scientists, and IT team members collectively. As a result, POCs will continue to grow across departments and divisions.


Why Data Catalogs Are the Standard for Data Intelligence

Gartner positions a data catalog as the foundation “to access and represent all metadata types in a connected knowledge graph.” To illustrate, I’ll share a personal experience about why I think a data catalog is crucial to data intelligence. Some years ago, when I worked at a large global technology company, my manager said, “I want you to figure out what metrics we should measure and tell us if our product is making our customers successful. We don’t have the data or analysis today.” I was surprised. How could that be? How can a successful enterprise not have the data model in place to measure a market-leading product? Have they based their decisions on gut instinct? As part of my work, I had to create some hypotheses, gather data, analyze it, and create a recommendation. To start, I had to find an expert who had a significant amount of tribal knowledge and could explain what data existed, where it was located, what it meant, how I should use it, and what pitfalls I might encounter when using it. Next, I had to get the data from the data warehouse and write a lot of SQL queries, all while finding the data science people to get their help.


An enterprise architecture approach to ESG

Often, and especially when looked at through a holistic enterprise architecture approach, achieving or reporting on certain ESG goals (or seizing on innovative new opportunities that ESG brings about) will not be possible through isolated tech changes, but will in fact require a more holistic digital transformation. An EA-supported ESG assessment will give an accurate view of the costs and benefits of an organisation's overall IT portfolio. Architecture lenses will then help to make the decisions necessary for ESG-related digital investment and/or transformation. For example, the high energy footprint of business IT systems is becoming an increasing focus of ESG concern. As a consequence, organisations are feeling significant pressure to move to ‘clean-IT,' optimising the trade-off between energy consumption and computational performance, and incorporating algorithmic and computational efficiencies in IT solutions and designs. Meeting ESG future states will likely require digitalisation and emerging technologies such as IoT, digital twins, big data, and AI.



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - May 07, 2022

The term 'digital transformation' needs a makeover: What would you rename it?

“New Ways of Working (NWoW) is our term. Of course, New Ways of Working requires quite a few catalysts in the form of culture and technology. "Culture: Retool your leadership in new ways of leading before you demand your organization be agile. Agile teams are empowered, cross-functional, and have the ability to move quickly and test and learn. The role of the leader is not to tell teams what to do but to create a fertile environment to innovate. The role of the leader is to create the outcomes and eliminate barriers. Train your leaders in these new ways of leading before you send your teams off to be agile. "Technology: Focus on agile infrastructure and data before you demand an agile work environment. Creating agile teams that are cross-functional and empowered is a good step. But this only works if you have embarked on your technical transformation and created the highways to safely and continuously deploy software. The combination of culture, technology, and agility is creating NWoW." -- John Marcante, Retired CIO, Vanguard


How Weak Analogies About Software Can Lead Us Astray

Software development/design teams are simultaneously understanding problems while solving them. The team makes dozens of choices every day, ideally informed by business objectives and user testing and applied architecture and data cleanliness. ... Likewise, UX design frameworks are usually interpreted by team-level designers to fit the problem at hand. We’re constantly trading off consistent look and feel across the application suite against what will help users at this step. So in the software business, we’re usually solving and designing and implementing and fixing all at the same time. The hard part isn’t the typing, it’s the thinking. ... So hiring junior developers or offshoring to lower the average engineering rate misses what’s most important. Crafting better software should get us more customers and make us more money. Small teams of empowered developers/designers/product managers with deep understanding of real customer problems will out-earn large teams doing contextless color-by-number implementation of specs. The intrinsic quality of the work matters, which is lost in a command-and-control organization.


The key skills needed to build diversity, equality, inclusion and belonging in the workplace

It’s up to executives to treat DEIB as a central business function, instituting and scaling their efforts. Degreed CEO Dan Levin, for example, describes it as a strategic imperative to integrate DEIB into all aspects of how we operate as a business, including at board level. ... Managers need to take big picture initiatives from the C-suite and use them to allocate work and opportunities in new ways. Those adept at these skills help their staff resolve conflicts and open their minds to new ideas. ... Two skills are especially important for both senior leaders and managers, study authors Stacia Garr and Priyanka Mehrotra write in the report. Respondents at higher-ranked companies for DEIB were more likely to say that people in both positions should excel at challenging the status quo and persuasion. I’ve seen leaders and managers faced with the task of convincing those under them to reconsider how their behaviors or words might make someone else feel excluded. Those who excel at these types of challenges have the skills to do so.


How Big Companies Kill Ideas - And How To Fight Back

Google said all the right things. Then over time — after like the first six months — it became like the Tinder Swindler. I was like, “What happened? Where is all this great stuff you said we were going to have?” It went out the window. Over time we were just one toy in the toy box. When you are bought for $3.2 billion, you would think people would actually respect and invest in the team as a new area of Google’s business. That is not how it worked. Apple is a whole different story, at least when Steve [Jobs] was there. It was respected when you did stuff. People took note and tried to make successes. It was my mistake. I did not realize that Google had gone through many of those billion-dollar acquisitions and just let them flail. They just said, “Oh, that was a fun ride. Moving on.” There was no existential crisis because you always had the ad money tree from search. Then it was just a matter of cutting their losses, as opposed to seeing that these are real people with families, trying to do right on the mission to build this thing. They just saw it more as dollars, at least from the finance side. 


Maintaining a Security Mindset for the Cloud Is Crucial

When you look at networking and security, that really hasn’t kept up with the pace of the application transitions to the cloud. And if you look at what happens today, many of these networks — and the network and security elements in those networks — are do-it-yourself. Migrating from this do-it-yourself approach to an as-a-service approach really allows organizations to unleash the agility and the simplification that their enterprises are looking for. Now we have a lot of examples, even in very recent times, where these do-it-yourself approaches have failed to address the needs of organizations, and one of the most prominent examples in the recent past is the variety of ransomware attacks. We all know that these ransomware attacks have been in the headlines in the recent news. Think about the reasons for these ransomware attacks. There could be many reasons. But one reason that I can think of is that the organizations that are hit by these ransomware attacks, and again, it’s not always black and white.


The design of a data governance system

A data governance system should restore control of data to the consumers and businesses generating it, according to this BIS Paper. Technological developments over the last two decades have led to an explosion in the availability of data and their processing. Consumers often do not know the benefits of the data they generate, and find it difficult to assert their rights regarding the collection, processing and sharing of their data. We propose a data governance system that restores control to the parties generating the data, by requiring consent prior to their use by service providers. The system should be open, with consent that is revocable, granular, auditable, and with notice in a secure environment. Conditions also include purpose and use limitation, data minimisation, and retention restriction. Trust in the system and widespread adoption are enhanced by mandating specialised data fiduciaries. The experience with India's Data Empowerment Protection Architecture (DEPA) suggests that such a system can operate at scale with low transaction costs.


Embracing culture change on the path to digital transformation

We did realize that if we didn't get the culture embedded that we would not be successful. So building that capability and building the culture was number one on the list. It was five years ago. It feels like a very long time ago to me. But we started that process and through the cloud guild we trained 7,000 people in cloud and 2,700 of those today are industry certified and working in our teams. So we've made really good progress. We've actually moved a lot of the original teams that were a bit hesitant, a bit concerned about having to move to this whole new way of working. And remember that our original teams didn't have a lot of tech skills, so to tell them that they were going to have to take on all of this technical accountability, an operational task that had previously been handed to our outsourcers, was daunting. And the only way we were going to overcome that was to build confidence. And we built confidence through education, through a lot of cultural work, a lot of explaining the strategy, a lot of explaining to people what good looked like in 2020, and how we were going to get to that place.


6 blockchain use cases for cybersecurity

Blockchain technology digitizes and distributes record-keeping across a network, so transaction verification processes no longer rely on a single central institution. Blockchains are always distributed but vary widely in permissions, sizes, roles, transparency, types of participants and how transactions are processed. A decentralized structure offers inherent security benefits because it eliminates the single point of failure. Blockchains are also composed of several built-in security qualities, such as cryptography, public and private keys, software-mediated consensus, contracts and identity controls. These built-in qualities offer data protection and integrity by verifying access, authenticating transaction records, proving traceability and maintaining privacy. These configurations enhance blockchain's position in the confidentiality, integrity and availability triad by offering improved resilience, transparency and encryption. Blockchains, however, are designed and built by people, which means they're subject to human error, bias or exposure based on use case, subversion and malicious attacks.
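The integrity property described here, each record cryptographically bound to its predecessor so tampering becomes evident, can be shown in a few lines of hashing. This is a toy sketch, not a real blockchain: there is no consensus, signing, or network.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

chain, prev = [], "0" * 64  # all-zero genesis hash
for record in [{"tx": "A->B 5"}, {"tx": "B->C 2"}]:
    digest = block_hash(record, prev)
    chain.append({"record": record, "prev": prev, "hash": digest})
    prev = digest

# Tamper with the first record: its stored hash no longer matches,
# which is how the chain provides traceability and integrity.
chain[0]["record"]["tx"] = "A->B 500"
valid = all(block_hash(b["record"], b["prev"]) == b["hash"] for b in chain)
print(valid)  # False
```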


Secrets to building a healthy CISO-vendor partnership

Any partnership is a two-way street, so as well as knowing what they are looking for themselves, it’s also important for CISOs to understand what a security vendor needs from them in return. “To build a strong relationship and deliver the best experience possible, we need our customers to be open and honest with us,” Rech says. “This honesty should extend to being clear on which other vendors are in the mix as they’re increasingly relying on flexible, cloud-native, open solutions.” The reality is that no one vendor can guarantee protection against every threat, Rech adds, but vendors are uniquely positioned to adapt to a business’s needs when they have full clarity of what those needs are. For example, constantly sharing information on threat groups, attack techniques or sector-specific threat trends can be overwhelming for some CISOs. “When we know more about their business and their priorities, we can direct the most relevant, need-to-know information to them.” Hellickson thinks vendors also benefit from reasonable, respectful feedback during a sales process that can become somewhat frustrating for CISOs.


Top 10 business needs driving IT spending today

“Cybersecurity [spend] has always been growing, but it has transformed from perimeter security that we’ve been used to for 40 years to more and more securing cloud and remote work and remote employees,” says John Lovelock, research vice president and distinguished analyst at Gartner. “Companies that used to be able to put the virtual brick walls around the building and say they’re secure on the inside now have too many openings — to the cloud, partners, customers, employees — for that strategy to be viable.” ... Other big business needs driving IT spending increases — such as boosting efficiency, customer experience, employee productivity, and profitability — also say something about where organizations are in 2022, experts say. “You have an enhanced discipline about cost management now and being smart about where you spend your tech dollars,” Priest says, adding that “it’s one of the best places to invest, especially in inflationary periods.” He says organizations are looking to automate, streamline operations, and reduce costs to help deal with an unsettled labor market, worker shortages, inflation, and geopolitical uncertainty.



Quote for the day:

"When we lead from the heart, we don't need to work on being authentic we just are!" -- Gordon Tredgold

Daily Tech Digest - May 06, 2022

If you want to make it big in tech, these are the skills you really need

Technical skills are not the only thing businesses need. Increasingly, employers are looking for candidates with the qualities and attributes that can bring teams together, make them more productive, and help companies navigate a work landscape that can change at a moment's notice: qualities that have proven indispensable in getting employers through the tumult of the COVID-19 pandemic.  ... According to the recruitment specialist, tech workers, particularly at middle and senior levels, are now expected to be business partners, and as such they need to be able to clearly communicate their strategies, activities and the impact of those on the wider business. This means good communication skills and interpersonal skills are more valuable than ever – particularly for companies that have had to adopt or scale out digital solutions quickly in response to pandemic-era working. "There are businesses out there that are tech businesses now that perhaps weren't before," says Phil Boden, Robert Half's director of permanent placement services, technology. 


Is Storage-as-Code the Next Step in DevOps?

“Large storage teams and IT organizations are looking to move into this kind of model,” he said. “People are excited to get out of that drudgery piece and build something as code.” And while developers aren’t the decision-makers or the budget holders for the storage market, Ferrario says, they are also a key influencer audience. “The IT developer knows they are responsible for building and automating their own infrastructure services,” he said. “And while they don’t hold the purse strings, they are the executors.” This is a logical trend to follow the popular Kubernetes abstraction, Ferrario said; there’s a widespread demand for infrastructure to be generic enough for everyone to access what they need to build, without having to bug infrastructure engineers all the time. Move faster, with guardrails and policy in place. “If you look at the origin of the cloud operating model years ago, the infrastructure that you as a developer or app owner need is on-demand — and you don’t have to worry about what’s going on behind the scenes,” Ferrario said. But when it’s on-premises, the process is still manual. “You need that Infrastructure-as-a-Service in place, with policy definition and so on.”
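To make the guardrails idea concrete, here is a toy sketch of a declarative storage request validated against policy before anything is provisioned; the field names and limits are assumptions for illustration, not any vendor's API.

```python
# A developer declares intent; platform policy decides whether to provision.
request = {"name": "orders-db-vol", "size_gib": 500,
           "tier": "ssd", "replication": 3}

policy = {"max_size_gib": 1024,
          "allowed_tiers": {"ssd", "hdd"},
          "min_replication": 2}

def validate(req: dict, pol: dict) -> list[str]:
    """Return a list of policy violations; empty means provision."""
    errors = []
    if req["size_gib"] > pol["max_size_gib"]:
        errors.append("size exceeds policy limit")
    if req["tier"] not in pol["allowed_tiers"]:
        errors.append("tier not allowed")
    if req["replication"] < pol["min_replication"]:
        errors.append("replication below minimum")
    return errors

problems = validate(request, policy)
print("provision" if not problems else f"reject: {problems}")
```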


3 ways building digital acumen can impact business success

Seeking to build digital acumen skills across the organization has provided several opportunities for cross-functional career moves and peer mentoring. Our IT colleagues are taking opportunities to lead and hone the soft skills they need today, like design thinking and agile working methods. In our manufacturing plants, for instance, digital procedures help to minimize the potential for human error because they strengthen our work processes and improve reliability. This data is vital to making timely decisions, whether someone is performing maintenance or an inspection. Our IT team is teaching plant employees how to use those tools because they play a critical role in developing the capabilities and maintaining them in the long term. With 130 different manufacturing sites, multiple plants at each site, and tens of thousands of procedures, it has a key impact on productivity and reliability when employees have digital skills in the field rather than needing to rely on the IT organization. Other areas in which our IT team is helping to build digital acumen include sales, marketing, and public affairs.


Can't Fight That REvil Ransomware Feeling Anymore?

None of REvil's likely now-former core members appear to have been brought to justice. Perhaps that's because they reside in Russia, which has historically ignored cybercrime, provided the criminals never hack Russia or its neighbors and do the occasional favor in return. The new version of REvil's business plan may simply be to bring that name recognition to bear as the group attempts to scare as many victims as possible into paying a seven-figure ransom. The ideal scenario for criminals is that victims pay, quickly and quietly, to avoid news of the attack becoming public, which helps attackers by making their efforts more difficult for law enforcement agencies to trace. If the ransomware group now using the REvil brand name can keep the operation afloat for even a month before again getting disrupted by law enforcement agencies, its members stand to make a serious profit, so long as they remain out of jail long enough to spend it. Unfortunately, the odds are on REvil Rebooted's side.


9 most important steps for SMBs to defend against ransomware attacks

Investigate whether you can retire out-of-date servers. Microsoft recently released a toolkit that may allow customers to get rid of the last Exchange Server problem. For years, the only way to properly administer mailboxes in Exchange Online where the domain uses Active Directory (AD) for identity management was to have a running Exchange Server in the environment to perform recipient management activities. ... The role eliminates the need to have a running Exchange Server for recipient management. In this scenario, you can install the updated tools on a domain-joined workstation, shut down your last Exchange Server, and manage recipients using Windows PowerShell. ... Investigate the consultants and their access. Attackers look for the weak link, and often that is an outside consultant. Always ensure that their remote access tools are patched and up to date. Ensure that they understand that they are often the entry point into a firm and that their actions and weaknesses are introduced into the firm as well. Discuss with your consultants what their processes are.


Delta: A highly available, strongly consistent storage service using chain replication

Fundamentally, chain replication organizes servers in a chain in a linear fashion. Much like a linked list, each chain involves a set of hosts that redundantly store replicas of objects. Each chain contains a sequence of servers. We call the first server the head and the last one the tail. Consider, for example, a chain with four servers. Each write request gets directed to the head server. The update pipelines from the head server to the tail server through the chain. Once all the servers have persisted the update, the tail responds to the write request. Read requests are directed only to tail servers. Anything a client can read from the tail has already been replicated across all servers belonging to the chain, guaranteeing strong consistency. ... Delta supports horizontal scalability by adding new servers into the bucket and smartly rebalancing chains to the newly added servers without affecting the service’s availability and throughput. As an example, one tactic is to have servers with the most chains transfer some chains to new servers as a way to rebalance the load.
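A minimal in-memory simulation of the protocol as described, with writes entering at the head and acknowledged by the tail, and reads served only by the tail. This is a sketch of the idea, not Delta's implementation; failure handling (re-chaining around a dead server) is omitted.

```python
class Replica:
    def __init__(self):
        self.store: dict[str, str] = {}

class Chain:
    def __init__(self, length: int = 4):
        self.replicas = [Replica() for _ in range(length)]

    def write(self, key: str, value: str) -> str:
        # The head receives the write and pipelines it down the chain.
        for replica in self.replicas:
            replica.store[key] = value
        # Only after the tail has persisted the update is the client acked.
        return "ack"

    def read(self, key: str):
        # Reads go to the tail: anything visible there is already on
        # every replica in the chain, hence strongly consistent.
        return self.replicas[-1].store.get(key)

chain = Chain()
chain.write("user:42", "profile-v1")
print(chain.read("user:42"))  # profile-v1
```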


Leaving cloud scalability to automation

The pushback on automated scalability, at least “always” attaching it to cloud-based systems to ensure that they never run out of resources, is that in many situations the operations of the systems won’t be cost-effective and will be less than efficient. For example, an inventory control application for a retail store may need to support 10x the amount of processing during the holidays. The easiest way to ensure that the system will be able to automatically provision the extra capacity it needs around seasonal spikes is to leverage automated scaling systems, such as serverless or more traditional autoscaling services. The issues come with looking at the cost optimization of that specific solution. Say an inventory application has built-in behaviors that the scaling automation detects as needing more compute or storage resources. Those resources are automatically provisioned to support the additional anticipated load. However, for this specific application, behaviors that trigger a need for more resources don’t actually need more resources. 
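The cost concern can be made concrete: an autoscaler wired to a raw trigger will happily buy capacity the workload never uses. Below is a toy scaling policy that checks real pressure and a cost guardrail before scaling out; every threshold is invented for illustration.

```python
def scale_decision(cpu_pct: float, queue_depth: int,
                   current_nodes: int, max_nodes: int,
                   cost_per_node_hour: float, budget_per_hour: float) -> int:
    """Return a target node count: scale on genuine pressure only,
    and never past the hourly cost budget."""
    wants_more = cpu_pct > 80 or queue_depth > 1000
    affordable = (current_nodes + 1) * cost_per_node_hour <= budget_per_hour
    if wants_more and affordable and current_nodes < max_nodes:
        return current_nodes + 1
    if cpu_pct < 30 and queue_depth < 100 and current_nodes > 1:
        return current_nodes - 1  # scale in once the pressure is gone
    return current_nodes

print(scale_decision(85, 1500, 4, 10, 3.0, 30.0))  # 5: real load, in budget
print(scale_decision(20, 50, 4, 10, 3.0, 30.0))    # 3: idle, scale in
```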


Ethernet creator Metcalfe: Web3 will have all kinds of 'network effects'

Metcalfe is still refining his pitch for his Law and learning at the same time. "There are going to be all kinds of network effects in Web3," said Metcalfe, during an informal gathering in Williamsburg, Brooklyn, on the sidelines of The Knowledge Graph conference, where enthusiasts of knowledge graphs share technology, techniques, and best practices. "For the first time, I am trying to say exactly what kinds of value are created by networks," Metcalfe told ZDNet at the Williamsburg event. "What I have learned today is that knowledge graphs can go a lot farther if they are decentralized," said Metcalfe. "The key is the connectivity." Earlier in the day, Metcalfe had given a talk at the KGC main stage, "Network Effects in Web3." In the talk, Metcalfe explained that networks are valuable in many ways. They offer value in "collecting data," said Metcalfe: the ability to get data from many participants. There was also sharing value: sharing disk drives, say, or sharing files. Netflix, said Metcalfe, has "distribution value — they distribute content and it's valuable."


NOAA seeks input on new satellite sensors and digital twin

“The ultimate goal is to improve the forecast skills of NOAA,” Sid Boukabara, principal scientist at NOAA’s Satellite and Information Service Office of System Architecture and Advanced Planning, told SpaceNews. “These technologies have the potential to take us a leap forward in our ability to provide good data to our customers.” Gathering data in the microwave portion of the electromagnetic spectrum is a key ingredient of accurate weather forecasts. NOAA currently relies on the Northrop Grumman Advanced Technology Microwave Sounder, flying on polar-orbiting weather satellites, which gathers data in 22 channels. Future microwave sounders could “sample at a much higher spectral resolution and would have potentially hundreds of channels,” Boukabara said. “By having a lot more channels, we will be able to better measure the temperature and moisture in the atmosphere.” Measuring the vertical distribution of atmospheric wind from space is another NOAA goal. For now, meteorologists determine wind direction and intensity by observing the motion of moisture in the atmosphere.


4 Database Access Control Methods to Automate

The beauty of using security automation as a data broker is that it can validate data-retrieval requests, including verifying that the requestor actually has permission to see the data being requested. If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request can be generated and sent from within the solution to streamline the process. It also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request, and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.
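A minimal sketch of that broker flow in Python follows; all names, datasets, and addresses are invented for illustration, not taken from any particular product.

```python
# Hypothetical data-broker flow: validate the requestor's role, and if the
# permission is missing, auto-generate a time-boxed access request that is
# routed to the data owner with context attached.
from datetime import datetime, timedelta

ROLE_GRANTS = {"analyst": {"sales_db"}}            # role -> permitted datasets
OWNERS = {"hr_db": "hr-team@example.com"}          # dataset -> data owner

def handle_request(user: str, role: str, dataset: str) -> dict:
    if dataset in ROLE_GRANTS.get(role, set()):
        return {"action": "serve", "dataset": dataset}
    # Permission missing: populate a request for the owner, including a
    # default expiry so any grant is one-time or time-limited.
    return {
        "action": "request_access",
        "to": OWNERS.get(dataset, "data-governance@example.com"),
        "requestor": user,
        "dataset": dataset,
        "expires": (datetime.utcnow() + timedelta(days=14)).isoformat(),
    }

print(handle_request("jo", "analyst", "hr_db"))    # -> routed to the owner
```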



Quote for the day:

"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright

Daily Tech Digest - May 05, 2022

Being a responsible CTO isn’t just about moving to the cloud

The case for being a responsible CTO is just as strong as the case for being a tech-savvy one if a company wants to thrive in a digital economy. There are many facets to being a responsible CTO, such as making sure that code is written in a diverse way, and that citizen data is used appropriately. In a BCS webinar, IBM fellow and vice-president for technology in EMEA, Rashik Parmar, summarised the three biggest forces driving unprecedented change today as post-pandemic work, digitalisation, and the climate emergency. With many organisations turning to technology to help solve some of the biggest challenges they face today, it’s clear that there will need to be answers about how this tech-heavy economy will impact the environment. It makes sense that this is often the first place a CTO will start when deciding how to drive a more responsible future. ... If we focus on the environmental considerations, it’s becoming more widely understood that, whilst a move to the cloud may do more to reduce an organisation’s carbon emissions than running multiple on-premises systems, the initiative alone isn’t going to spell good news for climate change.


Frozen Neon Invention Jolts Quantum Computer Race

The group's experiments reveal that the new qubit can already stay in superposition for 220 nanoseconds and change state in only a few nanoseconds, outperforming the charge-based qubits that scientists have worked on for 20 years. "This is a completely new qubit platform," Jin says. "It adds itself to the existing qubit family and has big potential to be improved and to compete with currently well-known qubits." The researchers suggest that by basing qubits on an electron's spin instead of its charge, they could achieve coherence times exceeding one second. They add that the relative simplicity of the device may lend itself to easy, low-cost manufacturing. The new qubit resembles previous work creating qubits from electrons on liquid helium. However, the researchers note that frozen neon is far more rigid than liquid helium, which suppresses the surface vibrations that can disrupt qubits. It remains uncertain how scalable the new system is: whether it can incorporate hundreds, thousands, or millions of qubits.


AI for Cybersecurity Shimmers With Promise, but Challenges Abound

There are definite differences of opinion between business executives, who largely consider AI to be a perfect solution, and security analysts on the ground, who have to deal with the day-to-day reality, says Devo's Ollmann. "In the trenches, the AI part is not fulfilling the expectations and the hopes of better triaging, and in the meantime, the AI that is being used to detect threats is working almost too well," he says. "We see the net volume of alerts and incidents that are making it into the SOC analysts' hands is continuing to increase, while the capacity to investigate and close those cases has remained static." The continuing challenges that come with AI features mean that companies still do not trust the technology. According to the survey's respondents, a majority of companies (57%) rely on AI features more, or much more, than they should, compared with only 14% that do not use AI enough. In addition, few security teams have turned on automated response, partly because of this lack of trust, but also because automated response requires a tighter integration between products that just is not there yet, says Ollmann.


Concerned about cloud costs? Have you tried using newer virtual machines?

“Customers are willing to pay more for newer GPU instances if they deliver value in being able to solve complex problems quicker,” he wrote. Some of this can be chalked up to the fact that, until recently, customers looking to deploy workloads on these instances have had to do so on dedicated GPUs, as opposed to renting smaller virtual processing units. And while Rogers notes that customers largely prefer to run their workloads this way, that may be changing. Over the past few years, Nvidia, which dominates the cloud GPU market, has introduced features that allow customers to split GPUs into multiple independent virtual processing units using a technology called Multi-Instance GPU, or MIG for short. Debuted alongside Nvidia’s Ampere architecture in early 2020, the technology enables customers to split each physical GPU into up to seven individually addressable instances. And with the chipmaker’s Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multi-tenancy, which open the door to their use in confidential computing environments.
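For a sense of what those slices look like to software, here is a sketch that enumerates MIG instances through NVML. It assumes the pynvml (nvidia-ml-py) Python bindings and a MIG-capable, MIG-enabled GPU; the partitioning itself is done out of band by an administrator.

```python
# Enumerate MIG slices on Ampere-or-newer GPUs via NVML. Each enabled GPU
# exposes up to seven individually addressable MIG devices.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
    try:
        current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
    except pynvml.NVMLError:
        continue  # pre-Ampere GPUs do not support MIG
    if current != pynvml.NVML_DEVICE_MIG_ENABLE:
        continue
    for m in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, m)
        except pynvml.NVMLError:
            break  # no more MIG devices configured on this GPU
        print("GPU", i, "MIG slice", m, pynvml.nvmlDeviceGetName(mig))
pynvml.nvmlShutdown()
```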


Attackers Use Event Logs to Hide Fileless Malware

The ability to inject malware into a system’s memory classifies it as fileless. As the name suggests, fileless malware infects targeted computers while leaving no artifacts behind on the local hard drive, making it easy to sidestep traditional signature-based security and forensics tools. The technique, in which attackers hide their activities in a computer’s random-access memory and use native Windows tools such as PowerShell and Windows Management Instrumentation (WMI), isn’t new. What is new, however, is how the encrypted shellcode containing the malicious payload is embedded into Windows event logs. To avoid detection, the code “is divided into 8 KB blocks and saved in the binary part of event logs,” Legezo said. “The dropper not only puts the launcher on disk for side-loading, but also writes information messages with shellcode into existing Windows KMS event log.” “The dropped wer.dll is a loader and wouldn’t do any harm without the shellcode hidden in Windows event logs,” he continues. “The dropper searches the event logs for records with category 0x4142 (‘AB’ in ASCII) and having the Key Management Service as a source.”
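Defenders can hunt for exactly that pattern. Below is a minimal sketch using the pywin32 bindings; the choice of the Application log and the exact field matching are assumptions based on the description above, not Kaspersky's published detection logic.

```python
# Minimal hunt for the pattern described above: event records whose source
# is "Key Management Service" and whose category is 0x4142 ("AB" in ASCII),
# with a non-empty binary part where a shellcode block could be stashed.
import win32evtlog  # pip install pywin32

hand = win32evtlog.OpenEventLog(None, "Application")
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
try:
    while True:
        records = win32evtlog.ReadEventLog(hand, flags, 0)
        if not records:
            break  # end of the log
        for rec in records:
            if (rec.SourceName == "Key Management Service"
                    and rec.EventCategory == 0x4142 and rec.Data):
                # rec.Data holds the record's binary part; flag for analysis.
                print("suspicious record", rec.RecordNumber,
                      len(rec.Data), "bytes of binary data")
finally:
    win32evtlog.CloseEventLog(hand)
```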


Fortinet CEO Ken Xie: OT Business Will Be Bigger Than SD-WAN

"We definitely see OT as a bigger market going forward, probably bigger than SD-WAN," Xie tells investors Wednesday. "The growth is very, very strong. We do see a lot of potential, and we also have invested a lot in this area to meet the demand." Despite its potential, Fortinet's OT practice today is considerably smaller than its SD-WAN business, which has been a company priority for years. SD-WAN accounted for 16% of Fortinet's total billings in the quarter ended Dec. 31 while OT accounted for just 8% of total billings over that same time period. Fortinet last summer had the second-largest SD-WAN market share in the world, trailing only Cisco. Fortinet's OT success coincides with growing demand from manufacturers, which CFO Keith Jensen says is the one vertical that continues to stand out for the company. ... "The strength in manufacturing really speaks to the threat environment, ransomware, OT, and things of that nature," Jensen says. "Manufacturing is trying desperately to break into the top five of our verticals and it's getting closer and closer every quarter."


Meta has built a massive new language AI—and it’s giving it away for free

Meta AI says it wants to change that. “Many of us have been university researchers,” says Pineau. “We know the gap that exists between universities and industry in terms of the ability to build these models. Making this one available to researchers was a no-brainer.” She hopes that others will pore over their work and pull it apart or build on it. Breakthroughs come faster when more people are involved, she says. Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022. With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Pineau. 
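Although the full 175-billion-parameter model was gated behind a research-access request, the smaller OPT checkpoints Meta released publicly can be loaded through the Hugging Face transformers library. A minimal sketch, assuming the facebook/opt-125m checkpoint and a recent transformers version:

```python
# Load one of the publicly released smaller OPT checkpoints and generate
# a short continuation. Model size scales up the same way (opt-350m,
# opt-1.3b, ...); the 175B weights required a separate access request.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tok("Open science means", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```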


Tackling the threats posed by shadow IT

Shadow IT can be tough to mitigate, given the embedded culture of hybrid working in many organizations and a general lack of engagement between employees and their IT teams. For staff to keep accessing apps securely from anywhere, at any time, and on any device, businesses must evolve their approach to organizational security. Because the modern working environment moves at such a fast pace, employees have turned en masse to shadow IT whenever the sanctioned experience isn’t quick or accurate enough. This bypasses secure networks and best practices and can leave IT departments out of the process. One way of controlling this is to deploy corporate-managed devices that provide remote access, giving IT teams most of the control and removing the temptation for employees to use unsanctioned hardware. Providing compelling apps, data, and services with a good user experience should reduce dependence on shadow IT, putting IT teams back in the driving seat and restoring security.


5 AI adoption mistakes to avoid

Every AI-related business goal begins with data – it is the fuel that AI engines run on. One of the biggest mistakes companies make is not taking care of their data, starting with the misconception that data is solely the responsibility of the IT department. Before data is captured and fed into AI systems, business subject matter experts and data scientists should be looped in, and executives should provide oversight to ensure the right data is being captured and maintained appropriately. It’s important for non-IT personnel to realize not only that they benefit from good data through higher-quality AI recommendations, but also that their expertise is a critical input to the AI system. Make sure that all teams share responsibility for curating, vetting, and maintaining data. Data management procedures are also a key component of data care. ... AI requires intervention to sustain it as an effective solution over time. For example, if AI is malfunctioning or business objectives change, AI processes need to change too. Doing nothing, or intervening inadequately, could result in AI recommendations that hinder or run contrary to business objectives.


SEC Doubles Cyber Unit Staff to Protect Crypto Users

The SEC says that the Division of Enforcement's newly named Crypto Assets and Cyber Unit, formerly known as the Cyber Unit, will grow to 50 dedicated positions. "The U.S. has the greatest capital markets because investors have faith in them, and as more investors access the crypto markets, it is increasingly important to dedicate more resources to protecting them," says SEC Chair Gary Gensler. The dedicated unit has successfully brought dozens of cases against those seeking to take advantage of investors in crypto markets, he says. ... "This is great news! A lot of the cryptocurrency market is against any regulations, including those that would safeguard their own value, but that's not the vast majority of the rest of the world. The cryptocurrency world is full of outright scams, criminals and ne'er-do-well-ers," says Roger Grimes, data-driven defense evangelist at cybersecurity firm KnowBe4. Grimes adds that even legal and very sophisticated financiers and investors are taking advantage of the immaturity of the cryptocurrency market.



Quote for the day:

"The very essence of leadership is that you have to have vision. You can't blow an uncertain trumpet." -- Theodore M. Hesburgh