Daily Tech Digest - February 19, 2023

2023 could be the breakthrough year for quantum computing

Despite progress on short-term applications, the challenge of error correction will not disappear in 2023. Far from it: the holy grail of quantum computing will continue to be building a machine capable of fault tolerance. 2023 may bring software or hardware breakthroughs that show we’re closer than we think, but otherwise this will remain something achieved far beyond 2023. Although it means everything to some quantum companies and investors, most future corporate users of quantum computing will see it as too far off the time horizon to care much. The exception will be government and anyone else with a significant, long-term interest in cryptography. Despite those long time horizons, 2023 will define clearer blueprints and timelines for building fault-tolerant quantum computers. Indeed, there is also an outside chance that 2023 will be the year when the quantum industry rules out the possibility of short-term applications for good and doubles down on the 7- to 10-year journey towards large-scale fault-tolerant systems.


Technical Debt is a Major Threat to Innovation

The challenge is that instead of simply trying to keep the proverbial IT lights on, as they did during the COVID-19 era, IT teams are now being asked to innovate to advance digital business transformation initiatives, said Orlandini. A full 87% of survey respondents cited modernizing critical applications as a key success driver. As a result, many organizations are embracing platform engineering to bring more structure to their DevOps processes, he noted. The challenge, however, is striking a balance between a more centralized approach to DevOps and maintaining the ability of developers to innovate, said Orlandini. The issue, of course, is that in addition to massive technical debt, the latest generation of applications is more distributed than ever. The survey found 91% of respondents now rely on multiple public cloud providers for different workloads, with 54% of data residing on a public cloud. However, the survey also found on-premises IT environments are still relevant, with 20% planning to repatriate select public cloud workloads to an on-premises model over the next 12 months.


What’s Going Into NIST’s New Digital Identity Guidelines?

Both government and private industries have been collecting and using facial images for years. However, critics of facial recognition technology accuse it of racial, ethnic, gender and age-based biases, as it struggles to properly identify people of color and women. These algorithmic biases perpetuate discrimination and add risk to a technology meant to add security. The updated NIST digital guidelines will directly address the struggles of facial recognition in particular, and biometrics overall. “The forthcoming draft will include biometric performance requirements designed to make sure there aren’t major discrepancies in the tech’s effectiveness across different demographic groups,” FCW reported. Rather than depend on digital photos alone for proof, NIST will add more options to prove identity. Lowering risk is as important to private industries as it is to federal agencies. Therefore, it would behoove enterprises to rethink their identity proofing.


The Past and Present of Serverless

As a new computing paradigm of the cloud era, the Serverless architecture is a naturally distributed architecture, and its working principle differs somewhat from traditional architectures. In the traditional architecture, developers need to purchase virtual machine services, initialize the running environment, and install the required software (such as database software and server software). After preparing the environment, they upload the developed business code and start the application; users can then access the target application through network requests. If the number of application requests grows or shrinks, developers or O&M personnel must scale the relevant resources according to the actual number of requests and add corresponding policies to the load balancing and reverse proxy modules to ensure the scaling operation takes effect in a timely manner, all while ensuring online users are not affected. Under the Serverless architecture, the entire application release process and working principle change to some extent.
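
To make the contrast concrete, here is a minimal sketch of what deployment reduces to under a FaaS-style Serverless platform: the developer supplies only a handler function, and the platform owns provisioning, load balancing, and scaling. The event shape below follows a common convention (e.g., AWS Lambda) and is an illustrative assumption, not something specified in the article.

```python
import json

def handler(event, context):
    """Entry point the platform invokes for each incoming request."""
    # The platform parses the HTTP request into an event dict; no server
    # process, load balancer, or scaling policy is managed by the developer.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```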


Why Apache Beam is the next big thing in big data processing

It’s a programming model for writing big data processing pipelines that is portable and unified. What does that mean exactly? First, let’s understand the use cases for big data processing pipelines. Batch processing is a technique used in big data pipelines to analyze and process large volumes of data in batches or sets: data is collected over a period of time, and then the entire batch is processed together. Stream processing is processing data in real time as it is generated, rather than in batches: data is processed continuously as it flows through the pipeline. ... Beam offers multi-language pipelines, which are pipelines constructed using one Beam SDK language that incorporate one or more transforms from another Beam SDK language. The transforms from the other SDK language are known as cross-language transforms.
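
The unified model is easy to see in code. Below is a minimal word-count sketch using the Beam Python SDK: the pipeline shape stays the same whether the source is bounded (batch) or unbounded (streaming), and the runner is swappable. The sample sentences are illustrative.

```python
import apache_beam as beam

# Runs on the local DirectRunner by default; pass a different runner via
# PipelineOptions to execute the identical pipeline on Dataflow, Flink, etc.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.Create(["beam is portable", "beam is unified"])
        | "Split" >> beam.FlatMap(str.split)
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```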


The Use of ChatGPT in the Cyber Security Industry

ChatGPT has also been useful within cyber defense. It can be asked to create a Web Application Firewall (WAF) rule to detect a specific type of attack. In a threat hunting scenario, the tool can produce a machine learning model in a language such as Python to analyze the network traffic captured in a .pcap file and thereby identify possible malicious behavior: for example, a network connection to an already-known malicious IP address that may indicate a compromised device, or an unusual increase in brute-force attempts to access the network, among other possibilities. ... This is worrying to the point that New York City schools have blocked access to ChatGPT out of concern about the negative impact it can have on students’ learning, since in most cases, depending on the question, the answer is provided without any effort and without having to study.
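
As a minimal sketch of the kind of .pcap triage script described above, the snippet below uses scapy to flag traffic involving already-known malicious IPs. The capture file name and indicator list are illustrative assumptions; a real deployment would pull indicators from a threat intelligence feed (and in the article's scenario, ChatGPT, not a human, would be drafting this kind of code).

```python
from scapy.all import rdpcap, IP

KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.77"}  # hypothetical indicators

packets = rdpcap("capture.pcap")  # hypothetical capture file
for pkt in packets:
    if IP in pkt:
        src, dst = pkt[IP].src, pkt[IP].dst
        if src in KNOWN_BAD_IPS or dst in KNOWN_BAD_IPS:
            print(f"ALERT: traffic involving known-bad IP: {src} -> {dst}")
```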


Is quantum machine learning ready for primetime?

Hopkins disagrees. “We are trying to apply [quantum ML] already,” he says, joining up with multiple clients to explore practical applications for such methods on a timescale of years, not decades, as some have ventured. ... “You’re not going to fit that on a quantum computer with only 433 qubits,” says Hopkins – but sufficient progress is being made each year to expand the number of quantum ML experiments that can be run. He also predicts that quantum ML models will become more generalisable. Schuld, too, is hopeful that the quantum ML field will directly benefit from recent and forthcoming advances on the hardware side. It’ll be at that point, she predicts, that researchers can begin testing quantum ML models on realistic problem sizes, and that we’re likely to see what she describes as a ‘smoking gun’ revealing a set of overarching principles in general quantum ML – one that shows just how much we do and don’t know about the mysteries of applying these algorithms to complex, real-world problems.


Cyber Resilience Act: A step towards safe and secure digital products in Europe

Cybersecurity threats are global and continually evolving. They are targeting complex, interdependent systems that are hard to secure as threats can come from many places. A product that had strong security yesterday can have weak security tomorrow as new vulnerabilities and attack tactics are discovered. Even with a manufacturer appropriately mitigating risks, a product can still be compromised through supply chain attacks, the underlying digital infrastructure, an employee or many other ways. Microsoft alone analyzes 43 trillion security signals daily to better understand and protect against cyberthreats. Staying one step ahead requires speed and agility. Moreover, addressing digital threats requires a skilled cybersecurity workforce that helps organizations prepare and helps authorities ensure adequate enforcement. However, in Europe and across the world there is a shortage of skilled staff. Over 70% of businesses cannot find staff with the required digital skills. 


Microservices Architecture for Enterprise Large-Scaled Application

Microservices architecture is a good choice for complex, large-scale applications that require a high level of scalability, availability, and agility. It can also be a good fit for organizations that need to integrate with multiple third-party services or systems. However, microservices architecture is not a one-size-fits-all solution, and it may not be the best choice for all applications. It requires additional effort in terms of designing, implementing, and maintaining the services, as well as managing the communication between them. Additionally, the overhead of coordinating between services can result in increased latency and decreased performance, so it may not be the best choice for applications that require high performance or low latency. ... Microservices architecture is a good choice for organizations that require high scalability, availability, and agility, and are willing to invest in the additional effort required to design, implement, and maintain a microservices-based application.


Developing a successful cyber resilience framework

The difference between cyber security and cyber resilience is key. Cyber security focuses on protecting an organization from cyber attack. It involves things such as firewalls, VPNs, anti-malware software, and hygiene, such as patching software and firmware and training employees in secure behavior. On the other hand, “cyber resilience focuses on what happens when cyber security measures fail, as well as when systems are disrupted by things such as human error, power outages, and weather.” Resiliency takes into account where an organization's operations are reliant on technology, where critical data is stored, and how those areas can be affected by disruption. ... Cyber resilience includes preparation for business continuity and involves not just cyber attacks or data breaches, but other adverse conditions and challenges as well. For example, if the workforce is working remotely due to a catastrophic scenario, like the COVID-19 pandemic, but is still able to perform business operations well and produce results in a cyber-secure environment, the company is demonstrating cyber resilience.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - February 18, 2023

Oracle outages serve as warning for companies relying on cloud technology

“Oracle engineers identified a performance issue within the back-end infrastructure supporting the OCI Public DNS API, which prevented some incoming service requests from being processed as expected during the impact window,” the company said on its cloud infrastructure website. In an update, the company said it implemented "an adaptive mitigation approach using real-time backend optimizations and fine-tuning of DNS Load Management to handle current requests." Oracle said that the outage caused a variety of problems for customers. OCI customers using OCI Vault, API Gateway, Oracle Digital Assistant, and OCI Search with OpenSearch, for example, may have received 5xx-type errors or failures (which are associated with server problems), Oracle said. Identity customers may have experienced issues when creating and modifying new domains. In addition, Oracle Management Cloud customers may have been unable to create new instances or delete existing instances, Oracle said. Oracle Analytics Cloud, Oracle Integration Cloud, Oracle Visual Builder Studio, and Oracle Content Management customers may have encountered failures when creating new instances.


EU parliamentary committee says 'no' to EU-US data privacy framework

In particular, the committee noted, the executive order is too vague and leaves US courts — who would be the sole interpreters of the policy — wiggle room to approve the bulk collection of data for signals intelligence, and it doesn’t apply to data accessed under US laws like the Cloud Act and the Patriot Act. The parliamentary committee's major points echoed those of many critics of the deal in the EU, as well as the criticism of the American Civil Liberties Union (ACLU), which has said that the US has failed to enact meaningful surveillance reform. ... In short, the committee said that US domestic law is simply incompatible with the GDPR framework and that no agreement should be reached until those laws are more in alignment. The committee’s negative response this week to the proposed data privacy framework, however, was a nonbinding draft resolution and, though it is a sticking point, does not put a formal halt to the adoption process, as its approval was not required to move the agreement along.


How edge devices and infrastructure will shape the metaverse experience

Cloud-native edge infrastructure can address these shortcomings and provide optimized service chaining. It can handle a tremendous amount of data processing while delivering cost-effective, terabit-scale performance and reduced power consumption. In doing so, edge computing can move past closed networking models to meet the demanding data processing requirements of the metaverse. “Edge computing allows data to be processed at or near the data source, implying that commands and processes will occur promptly. As the metaverse will require massive data simultaneously, processing data quickly and seamlessly depends on proximity,” Prasad Joshi, SVP and head of emerging technology solutions at Infosys, told VentureBeat. “Edge computing offers the ability to process such information on a headset or on the device, thereby making that immersive experience much more effective.” ... The power, space and cooling limitations of legacy architecture further exacerbate this data surge. While these challenges impact consumer-based metaverse applications, the stakes are much higher for enterprise use cases.


The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter

It’s not a Skynet-level supercomputer that can manipulate the real world. ... Those feats are impressive. But combined with what appears to be an unstable personality, a capacity to threaten individuals, and an ability to brush off the safety features Microsoft has attempted to constrain it with, that power could also be incredibly dangerous. Von Hagen says he hopes that his experience being threatened by Bing makes the world wake up to the risk of artificial intelligence systems that are powerful but not benevolent—and forces more attention on the urgent task of “aligning” AI to human values. “I’m scared in the long term,” he says. “I think when we get to the stage where AI could potentially harm me, I think not only I have a problem, but humanity has a problem.” Ever since OpenAI’s chatbot ChatGPT displayed the power of recent AI innovations to the general public late last year, Big Tech companies have been rushing to market with AI technologies that, until recently, they had kept behind closed doors as they worked to make them safer.


Machines Are Dreaming Instead of Learning

The question is—how much of the ‘data problem’ is about the quantity versus the quality of data? To deal with this data scarcity, people are moving away from accessing and using real data towards using synthetic data. In a nutshell, synthetic data is artificially generated data, produced mathematically or statistically, which appears close to real-world data. It also increases the amount of data available which, in turn, can increase the accuracy of each model and remove some of the existing flaws in the data. There are many positive reasons to be attracted to synthetic data, such as data privacy. ... One of the reasons that synthetic data is on the rise is to tackle the bias that is present in smaller datasets. Yet even though larger datasets can have poor-quality data—which would require more fine-tuning and heavier workloads—synthetic data does not capture the quality and the amount of variability present within real-world data. Synthetic data is generated using algorithms that model the statistical properties of real data.
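
The final sentence is easy to make concrete. Below is a minimal sketch of the simplest version of the idea: estimate statistical properties (mean and covariance) of a real dataset, then sample new records that share them. Real generators (GANs, copulas, language models) are far more sophisticated; the toy two-column dataset here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a real dataset: 1,000 records of (height cm, weight kg).
real = rng.normal(loc=[170.0, 70.0], scale=[10.0, 12.0], size=(1000, 2))

# Model the statistical properties of the real data...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...then generate synthetic records with the same structure.
synthetic = rng.multivariate_normal(mu, cov, size=1000)
print(synthetic[:3])  # new rows, statistically similar to the real ones
```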


Making Microservices Just the Right Size

By attempting to make smaller and simpler services, applications have become more complex. The smaller service size is a great benefit to the individual development team that owns that service, but the complex interconnection between services has made the overall system architecture more involved. We’ve essentially moved the complexity uphill. Rather than individual developers dealing with complexity at the code level, system architects deal with the complexity at the system level. Thus, services that are too large are difficult to build and understand at scale, while services that are too small simply move the complexity up to the system level. The goal, therefore, is to find the right size. It’s like the story of Goldilocks and the Three Bears: finding the right size for your services is challenging, often involves trial and error, and it’s easy to build them too big or too small. How do you find the Goldilocks size for your microservices? The answer depends a lot on your organization and your application.


4 Ways To Be A Learning Leader

Constant curiosity makes learning simply part of you and your way of being. If you're motivated and hungry to improve your skills and knowledge, you'll learn more successfully. Professor and researcher Francesca Gino wrote, “When our curiosity is triggered, we think more deeply and rationally about decisions and come up with more-creative solutions.” Additionally, developing and demonstrating a genuine interest in people and their perspectives and interests enriches all your relationships. Start by asking yourself what you're curious about, then think about all the topics that extend from that. If this still feels hard, set an intention to ask one other-oriented question per meeting or interaction. We all consume and digest information and learning differently. Think about how you prefer to learn in given contexts. For example, do you like to just go for it? Do you like talking to other leaders, coaches or mentors? Maybe you like podcasts or reading books and articles. Discover what works best for your learning.


Malware authors leverage more attack techniques that enable lateral movement

"An increase in the prevalence of techniques being performed to conduct lateral movement highlights the importance of enhancing threat prevention and detection both at the security perimeter as well as inside networks," researchers from cybersecurity firm Picus, said in their report. Many years ago lateral movement used to be associated primarily with advanced persistent threats (APTs). These sophisticated groups of attackers are often associated with intelligence agencies and governments, whose primary goals are cyberespionage or sabotage. To achieve these goals these groups typically take a long time to understand the network environments they infiltrate, establish deep persistence by installing implants on multiple systems, they identify critical servers and sensitive data stores and try to extract credentials that gives them extensive access and privilege escalation. APTs also used to operate in a targeted manner, going to specific companies from specific industries that might have the secrets their handlers are looking for.


The cost and sustainability of generative AI

More demand for AI means more demand for the resources these AI systems use, such as public clouds and the services they provide. This demand will most likely be met with more data centers housing power-hungry servers and networking equipment. Public cloud providers are like any other utility resource provider and will increase prices as demand rises, much like we see household power bills go up seasonally (also based on demand). As a result, we normally curtail usage, running the air conditioning at 74 degrees rather than 68 in the summer. However, higher cloud computing costs may not have the same effect on enterprises. Businesses may find that these AI systems are not optional and are needed to drive certain critical business processes. In many cases, they may try to save money within the business, perhaps by reducing the number of employees in order to offset the cost of AI systems. It’s no secret that generative AI systems will displace many information workers soon.


6 quantum computing questions IT needs to ask

The challenge is that older systems' data formats and fields may not be compatible with newer systems. In addition, the fields and tables might not contain what you'd expect. There is also the complexity of free text fields that store keywords. Do not underestimate the challenge of making existing data available for a quantum application to work with. ... The important question in developing quantum applications is finding tools that can provide a 10-year lifespan with guaranteed software support. There are many open source tools for quantum-based application development. A company could take on one (or more) open source projects, but this can be a challenge and a costly commitment. The issue is not only keeping your software up to date (and retaining staff to develop it) but also developing quantum software that's compatible with the rest of your IT environment. When considering lifespan, factor in the risk that open source quantum software projects may be abandoned.



Quote for the day:

"Leadership is an opportunity to serve. It is not a trumpet call to self-importance." -- J. Donald Walters

Daily Tech Digest - February 17, 2023

Bard, Bing, and the 90% problem

With search in particular, accuracy and thoroughness matter. One simple answer is fine — when it’s right. And when you can trust that it’s right. But it certainly seems like right now, that’s anything but the case with any of this technology. Hell, Microsoft's Bing-bot includes prominent disclaimers that it’s likely to provide inaccurate or incomplete information! And all novelty and cool factor aside, I just don’t see how that’ll make for an especially useful utility from a search context, for as long as that remains the case. ... It's really quite simple: If even one out of every 10 attempts at using something produces a flawed or for any reason unsatisfactory result, folks tend to lose faith in said thing pretty fast. And they then end up turning to another tool for the same purpose more often than not. That's why lots of us rely on Assistant for functional commands, which work fairly consistently — but when it comes to more complex searches, whether we've got Assistant at our beck and call on a phone or built into the core system interface on a Chromebook, we're still more likely to go to Google to get an answer.


EaaS as a Technique to Raise Productivity in Teams

EaaS can help you provide your application in a staging environment. Essentially, this environment is a copy of your production environment. EaaS tools assist you with duplicating the production environment and all of its elements (e.g., the code, settings, and deployment configurations). These technologies enable you to quickly create such environments for your clients, providing them with a trial version of your software. Consequently, you can present your products to clients more quickly, even before the application is finished. EaaS also allows developers to be more creative by providing sandbox-like environments in which they can experiment with new ideas without having to build new configurations or recreate current ones. The EaaS approach is scalable and cost-effective: you pay only for the resources you use and the time your environment is online. So, if you need to submit a proof of concept to a stakeholder, you pay only for the time the environment will be operational.


Fraudsters are using machine learning to help write scam emails in different languages

Scammers don't even need to speak the language of the people or organizations they're targeting: analysis of some prolific BEC campaigns by researchers at Abnormal Security suggests that email fraudsters are turning to machine learning-powered translation tools like Google Translate to help compose emails used in the attacks. This technique is enabling widespread BEC campaigns for an expanded array of cyber-criminal groups, who can cast a larger net at minimal cost. "Attacking targets across various regions and using multiple languages is nothing new. However, in the past, these attacks were perpetrated mainly by sophisticated organizations with bigger budgets and more advanced resources," said Crane Hassold, director of threat intelligence at Abnormal Security. ... The payment fraud campaigns have been distributed in at least 13 different languages, including Danish, Dutch, Estonian, French, German, Hungarian, Italian, Norwegian, Polish, Portuguese, Spanish, and Swedish.


Don’t Let a Cyberattack Destroy Your Pharmacy

One mistake that many independent pharmacies make is to use free Gmail addresses to transmit sensitive data, Mr. Gallagher added. The email service is not encrypted or secure, he stressed, which is why a better option is to use a private domain for company email. Similarly, he added, it’s important to choose HIPAA-compliant videoconferencing software, such as Microsoft Teams, for discussions with patients and internal meetings. Sloppy data disposal practices are another concern. “What we’ve learned from previous breaches that have happened at pharmacies is that whether it’s paper or whether it’s electronic, it’s really a good idea to ensure that the information is responsibly and securely disposed of,” said Lee Kim, JD, the senior principal of cybersecurity and privacy at the Healthcare Information and Management Systems Society, who wasn’t a presenter at NASP. “How many of us actually think, ‘Well, maybe I should ensure that everything is wiped from the photocopier before it gets serviced’? Probably not many, but if you don’t think about the small transactional things like that … people’s information is at risk.”


States sketch out roadmaps for zero trust ‘journey’

“Money doesn't solve every problem, and endless amounts of money would not instantly create a perfect world where every state has zero trust fully implemented in a very mature way,” Pugh said. “But it would help those states that are very budget strapped and have many competing priorities.” One way of assessing how far along states are in implementing zero trust is whether it is “top of mind in security conversations,” said Jim Richberg, public sector field CISO and vice president of information security at Fortinet. And by that measure, state leaders are paying attention. Those that have led the way on state-level zero trust said guidance already exists from the likes of the National Institute of Standards and Technology’s Authenticator Assurance Levels and Identity Assurance Levels. With those guidelines in place, said Adam Ford, Illinois’ chief information security officer during a National Governors’ Association webinar, states can establish a baseline for themselves, even though the system nationwide is set up so we are "50 experiments going on at the same time," he said.


Don't put off data minimization

From a risk-based perspective, the biggest exposure is in relation to cyberattack. This is a particular threat for law firms because cybercriminals now include you on a shortlist of prime targets. The ABA’s cybersecurity report in 2021 observed that ransomware, in particular, is: “an increasing threat to lawyers and law firms of all sizes”. Microsoft revealed that state-sponsored Chinese hackers have been targeting “US-based universities, defense contractors, law firms and infectious disease researchers”. A lack of systematic data minimisation increases your attractiveness to such criminals because you present a larger, juicier target. Moreover, cyberattack can be your biggest nightmare. It incurs lost productivity and may entail ransom demands. You’ll likely need to pay cybercrime expert fees, and potentially regulatory and professional fines. But that’s not all. A New York based entertainment law firm suffered an attack in 2020 when hackers demanded a ransom payment of USD$42 million to prevent the release of confidential information about the firm’s world-famous clients. News outlets subsequently reported that the firm eventually paid out USD$365k. And there’s the rub. 


CIO role: 4 ways to do more with less

Even the best CIOs can fall victim to a common efficiency-robbing habit: getting lost in the weeds on a particular project. As CIO, you have a lot on your plate, and it’s easy to miss deadlines or deliver sub-par performance if you get too focused on details your team can – and should – handle. Assuming you have a competent, trustworthy team, let go of more minor details and remain laser-focused on your organization’s desired strategic outcomes. When CIOs feel compelled to control every detail, it can indicate a struggling organization. If a business’ IT arm is bogged down by legacy systems or an outpouring of manual and rote tasks that do nothing for business performance, the CIO will often be mired in dealing with organizational performance issues. That means more time managing internal fire drills and less time thinking strategically and making business-critical decisions. ... When you have the confidence and infrastructure to delegate details to your team, you’ll have much more bandwidth to focus on the big picture and drive your business forward.


Navigating the ever-changing landscape of digital security solutions

We see an increasingly fragmented geopolitical landscape with unique data residency requirements for each country which is resulting in localized hosting of solutions as well as nimbleness and increased granularity of data control. Regulations like GDPR and CCPA necessitate the need for not only safeguarding information (via encryption and tokenization) but also driving automated protection of PII. Recent regulations from the White House and guidance from CISA are aimed at driving better compliance with incident disclosure as well as offering a blueprint for zero trust. ... Most progressive organizations view cybersecurity as business critical and partner with organizations like ours to create a comprehensive cybersecurity strategy. In short, while there is increased oversight, both the consumers and providers of security solutions are more focused on: implementing a zero-trust approach, instituting automated protection of information and taking a partnership posture as opposed to a traditional vendor-buyer approach.


Cybersecurity Jobs Remain Secure Despite Recession Fears

"With reports of job cuts at organizations including Twitter, Meta, Microsoft, Amazon and Google, cybersecurity staff could benefit from proactive hiring targeted towards those recent layoffs," the report stated. "With so many tech jobs impacted by recent layoffs, it is possible that many of those individuals may find opportunity in pursuing a career in cybersecurity, where they can apply related skills and expertise." The resilience in demand for cybersecurity professionals comes as many workers burned out and resigned, part of the Great Resignation in 2022. Organizations that lost valuable specialists did so for three main reasons, Rosso says. Cybersecurity teams have traditionally not had great career advancement opportunities, so their ability to gain promotions and increased salaries at their current company are often limited. In addition, the culture surrounding many security teams has often led to burnout and mental stress, she says. "We know, for example, that at the end of 2021 and beginning of 2022, the Log4j issue was causing people to clock a lot of hours, and that led to some burnout," she says. 


Why Your Organization Needs to Embrace Data Resiliency

Enterprises should take a holistic approach to understanding their data: how it's gathered, how it's used throughout the organization, and how it's impacted by a lack of availability or corruption, Krishnamoorthy says. “This starts with creating a detailed map of business processes, applications, systems, and data,” he suggests. Schick notes that there's no industry-standard checklist for ensuring data resiliency, but advises separating critical and non-critical data, storing data in separate locations, logging transactions that change critical data, and using tools and processes to quickly recover corrupted or lost data. Enterprises should retain data only for as long as it's needed, O'Hern suggests. “We eliminate risk when we purge … which means it no longer exists to be held hostage.” Krishnamoorthy notes that it's also important to understand how applications, automated tools and systems, and IT staff interact with enterprise data from manageability, serviceability, and security perspectives. 



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - February 16, 2023

Eyes on Data: The Elevated Role of Data Management and the CDO

As data becomes more prevalent for every single employee of every organization, it is imperative that organizations go beyond data governance to develop a strong data-driven culture. The importance of a data-driven culture was identified as a key factor in overall success. Data culture starts at the top. Senior executives must establish a data mindset across the firm, emphasizing the importance of a sound data management discipline. Getting the most out of an organization’s data means investing in the programs that support it and the people who are tasked with using it to ensure strong data awareness and literacy. Without a focus on data literacy, organizations are at risk of coming up short in achieving their objectives. ... Today’s data management professionals are assuming more and more responsibility for the public’s data. It is critical, therefore, that firms take responsibility for the ethical access and use of this data and do everything they can to avoid unintentional outcomes due to poor data quality, lack of data analytic model governance, or hidden data biases. 


What are the biggest challenges organizations face in their digital transformation efforts?

The leadership should give a big safety net to everyone by saying ‘Hey, we are going on this journey, we are going to learn a lot and, if you fail, if you have issues, that’s okay. We’ll cover you, we are on your side, let’s just go through this learning journey first.’ So give that safety net for everyone. At the same time, provide some kind of framework for learning. You can’t just say to a whole organization we are going to be transforming ourselves and we are going to be DevOps-enabled and just leave it at that. You should have a program, and some kind of learning mechanism, and probably some outside training if that is needed. You should have days set aside, maybe even give employees 80% of the time to do normal work but 20% to learn something new. This framework of learning and enabling is really important for people to upskill themselves. Think in a different way and basically be happy about the journey that they are on because once people are motivated and happy, then a bunch of stuff starts happening.


Operational Resilience: More than Disaster Recovery

The broader focus of operational resilience requires organisation-wide participation. You cannot simply leave it to a single department or team. Instead, everyone needs to be involved, from executives and the board of directors to individual employees in multiple departments. In today’s climate, it’s not just your own organisation that’s under threat. Your suppliers, partners, and vendors are targets, too. If a major supplier is compromised or taken down, your business might go down with them. Leadership needs to understand risk and to know the risk tolerance and risk appetite of the company. That even includes things such as procurement functions and agreements with third-party suppliers. Resilience must be built into everything down to everyday workflows, and if a single supplier is insufficient to manage risk, then diversity of supply is a must. There are many cases where a cyber event at a supplier rendered multiple organisations unable to fulfil their business outcomes. ... In the context of operational resilience, every risk management scenario and process must consider the supply chain.


Helping users and organizations build an instinctive data privacy habit

Although there’s no one-size-fits-all measurement metric for engagement, there are a few guidelines we typically use. The first is tracking the level of engagement and participation from our stakeholders, including consumers, businesses, and other non-profit organizations we partner with. We look at the number of events and activities hosted (e.g., webinars, panel discussions, etc.) and how well attended those are, as well as media/social media mentions as a bellwether to gauge the reach and impact of the campaign. Additionally, we also measure success by the extent to which we’ve achieved the objectives and goals we set at the beginning of each campaign. It’s an abstract metric, but for Data Privacy Week, our mission is to raise awareness and educate individuals and organizations about the importance of protecting personal information and championing transparency by companies relative to how they collect, store, and use consumer data.


6 things ChatGPT can't do (and another 20 it refuses to do)

It won't write about anything after 2021 - As an AI language model, I was trained on a large dataset of text that only goes up to 2021. This means that I don't have access to information or events that occurred after that date. While I have access to a lot of historical data and information, I don't have access to current events, news, or updates that have occurred since my training data cutoff date. ... It won't predict future results of sports events or political contests - As an AI language model, I can process and analyze large amounts of data, including historical data on sports events and political contests. However, making accurate predictions for these types of events is not always straightforward. While historical data can be used to train machine learning models to identify patterns and correlations, it cannot account for unforeseeable events, changes in circumstances, or new variables that may impact the outcome. Additionally, humans make decisions and can change their behavior based on many factors, making it difficult to predict the outcome of an event with complete certainty.


8 signs you’ve mistimed a major IT initiative

Indecision and a poor risk appetite are two common reasons why CIOs frequently delay an important IT initiative. “Nimble, forward-looking leaders put execution speed at the center of performance discussions; indecisive IT leaders tend to rely on consensus decisions and endless risk evaluations,” says Colm Sparks-Austin, president and managing director of business advisory firm Capgemini Canada. To correctly time a major IT initiative, the decision-maker should align the initiative with business goals. “If the business isn’t spearheading the initiative, or is not aware of it, it’s clear that something is wrong,” Sparks-Austin says. CIOs should also ensure they’re analyzing all IT spend through a business goals lens, Sparks-Austin advises.  ... Unrealistic funding almost always plays an important role in initiative timing, observes Ravi Malick, CIO at cloud-based content management, collaboration, and file-sharing tool provider Box. Overly optimistic funding is almost always a main part of the equation when an initiative fails, he notes. 


How to make progress on managing unstructured data

“As the CIO, your job is to be able to provide the information a business needs in order to make decisions,” Minetola said. “The ability to now see into that 80 per cent of the data and make decisions based off that . . . is significant.” ... When thinking about all the data sources an organisation needs to grapple with as part of its transformation, it makes sense. For instance, consider a bank with thousands of computer systems in over a hundred countries. “You need technologies that close silos,” Evelson said. “Whenever we talk about digital transformation, data and analytics platforms that unify everything that I just talked about, like search-powered technologies, are at the top of everyone’s mind.” ... Search-powered technology should bring two critical capabilities to the table: a visualisation layer and machine learning. Visualisation improves the ability to extract insights from large volumes of data. “It’s one thing to be able to have data,” Minetola said. “It’s another thing to understand it.” Furthermore, machine learning such as natural language processing or vector search can help join data sources to create more relevance and context.


What Ukraine's IT Industry Can Teach CIOs About Resilience

The agile, remote structure refined during the pandemic has served Ukrainian IT companies well as they operate using a hybrid workforce -- some employees live abroad, some are on the move due to Russian attacks, and others serve in the military. Unlike traditional industries, many IT jobs are service-oriented. “​​All you need is a computer, Internet, and electricity. You can literally work from anywhere,” Kavetskyi says. Both companies and individuals have engaged in a sustained process of business continuity planning. Now, most organizations have it down to a science. “They have power generators in their offices and Starlinks,” Kavetskyi claims. He emphasizes the power of knowledge sharing: “The IT clusters helped small and medium-sized companies implement basic continuity plans. Everyone working in this industry had a chance to see what others were doing.” “Of course, there was data that couldn't be shared,” he adds. “But in general, big companies were willing to [share their strategies]. Mainly, we had to find time to organize those meetings, considering the logistical challenges.”


Soft skills: How well-rounded IT pros can push your business forward

With organizational spend under greater scrutiny, it’s critical for every new hire you onboard to add value to the business. Productivity and technical skills are paramount in demonstrating resource value. But when you have two candidates with comparable technical skills, you need to consider the value each person’s soft skills bring to the table. ... Soft skills impact how teams communicate, collaborate, and problem-solve, and these capabilities determine the success of your IT projects and client relationships – and, ultimately, your organizational culture. Company culture also plays a crucial role in your brand reputation: You want clients and job candidates to view your team as pragmatic, business-minded problem solvers and communicators. So as non-technical skills continue to play a critical role in the IT arena, it’s time to reconsider the qualities you search for and foster in employees. Skills tests like coding problems and design scenarios make it relatively easy to gauge an applicant’s technical skills. 


Evolving cyberattacks, alert fatigue creating DFIR burnout, regulatory risk

Magnet Forensics’ respondents generally agreed that addressing the burnout and alert fatigue facing DFIR professionals is hampered by recruiting and hiring challenges as well as onboarding difficulties and a lack of automation. Increased investment in automation would be “highly” or “extremely” valuable for a range of DFIR functions including the remote acquisition of target endpoints and the processing of digital evidence, half of respondents said. However, while automation such as security orchestration, automation, and response (SOAR) is already in place in many SOCs, those solutions orchestrate and automate cybersecurity runbooks by taking telemetry, enforcing actions and using other tools, the report noted. “While important for threat containment and remediation, these runbook-related activities are distinct from those performed by digital forensics automation solutions, which execute a data transformation pipeline by orchestrating, automating, performing, and monitoring forensic workflows,” it added.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson

Daily Tech Digest - February 15, 2023

What is generative AI and why is it so popular?

All it refers to is AI algorithms that generate or create an output, such as text, photo, video, code, data, and 3D renderings, from data they are trained on. The premise of generative AI is to create content, as opposed to other forms of AI, which might be used for other purposes, such as analysing data or helping to control a self-driving car. ... Machine learning refers to the subsection of AI that teaches a system to make a prediction based on data it's trained on. An example of this kind of prediction is when DALL-E is able to create an image based on the prompt you enter by discerning what the prompt actually means. Generative AI is, therefore, a machine-learning framework. ... Generative AI is used in any algorithm/model that utilizes AI to output a brand new attribute. Right now, the most prominent examples are ChatGPT and DALL-E, as well as any of their alternatives. Another example is MusicLM, Google's unreleased AI text-to-music generator. An additional in-development project is Google's Bard.


openIDL: The first insurance Open Governance Network and why the industry needs It

To date, openIDL’s member community includes premier carrier members Travelers, The Hartford, The Hanover, and Selective Insurance; state regulator and DOI members; infrastructure partners; associate members; and other non-profit organizations, government agencies, and research/academic institutions. openIDL’s network is built on Hyperledger Fabric, an LF distributed ledger software project. Hyperledger Fabric is intended as a foundation for developing applications or solutions with a modular architecture. The technology allows components, such as consensus and membership services, to be plug-and-play. Its modular and versatile design satisfies a broad range of industry use cases and offers a unique approach to consensus that enables performance at scale while preserving privacy. For the last few years, a running technology joke has been “describe your problem, and someone will tell you blockchain is the solution.” As funny as this is, what’s not funny is the truth behind the joke, and the insurance industry is certainly one that fell head over heels for the blockchain hype. 


Self-healing endpoints key to consolidating tech stacks, improving cyber-resiliency

Just as enterprises trust silicon-based zero-trust security over quantum computing, the same holds for self-healing embedded in an endpoint’s silicon. Forrester analyzed just how valuable self-healing in silicon is in its report, The Future of Endpoint Management. Forrester’s Andrew Hewitt, the report’s author, says that “self-healing will need to occur at multiple levels: 1) application; 2) operating system; and 3) firmware. Of these, self-healing embedded in the firmware will prove the most essential because it will ensure that all the software running on an endpoint, even agents that conduct self-healing at an OS level, can effectively run without disruption.” Forrester interviewed enterprises with standardized self-healing endpoints that rely on firmware-embedded logic to reconfigure themselves autonomously. Its study found that Absolute’s reliance on firmware-embedded persistence delivers a secured, undeletable digital tether to every PC-based endpoint. Organizations told Forrester that Absolute’s Resilience platform is noteworthy in providing real-time visibility and control of any device, on a network or not, along with detailed asset management data.


How enterprises can use ChatGPT and GPT-3

It is not possible to customize ChatGPT, since the language model on which it is based cannot be accessed. Though its creator company is called OpenAI, ChatGPT is not an open-source software application. However, OpenAI has made the GPT-3 model, as well as other large language models (LLMs) available. LLMs are machine learning applications that can perform a number of natural language processing tasks. “Because the underlying data is specific to the objectives, there is significantly more control over the process, possibly creating better results,” Gartner said. "Although this approach requires significant skills, data curation and funding, the emergence of a market for third-party, fit-for-purpose specialized models may make this option increasingly attractive." ... ChatGPT is based on a smaller text model, with a capacity of around 117 million parameters. GPT-3, which was trained on a massive 45TB of text data, is significantly larger, with a capacity of 175 billion parameters, Muhammad noted. ChatGPT is also not connected to the internet, and it can occasionally produce incorrect answers.
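
As a rough illustration of the fine-tuning option Gartner describes, here is a minimal sketch using the openai Python library's File and FineTune endpoints as they existed at the time of writing. The API key, file name, dataset, and model choice are illustrative assumptions.

```python
import openai

openai.api_key = "sk-..."  # your API key (placeholder)

# 1. Upload curated training data: one JSONL record per prompt/completion pair.
upload = openai.File.create(
    file=open("company_docs.jsonl", "rb"),  # hypothetical curated dataset
    purpose="fine-tune",
)

# 2. Start a fine-tune job against a GPT-3 base model.
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll with openai.FineTune.retrieve(job.id) until it completes
```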


Flaws in industrial wireless IoT solutions can give attackers deep access into OT networks

While many of these flaws are still in the process of responsible disclosure, one that has already been patched impacts Sierra Wireless AirLink routers and is tracked as CVE-2022-46649. This is a command injection vulnerability in the IP logging feature of ACEManager, the web-based management interface of the router, and is a variation of another flaw found by researchers from Talos in 2018 and tracked as CVE-2018-4061. It turns out that the filtering put in place by Sierra to address CVE-2018-4061 did not cover all exploit scenarios, and researchers from Otorio were able to bypass it. In CVE-2018-4061, attackers could attach additional shell commands to the tcpdump command executed by the ACEManager iplogging.cgi script by using the -z flag. This flag is supported by the command-line tcpdump utility and is used to pass so-called postrotate commands. Sierra fixed it by enforcing a filter that removes any -z flag from the command passed to the iplogging script if it's followed by a space, tab, form feed or vertical tab, which would block, for example, "tcpdump -z reboot".
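
A minimal, hypothetical reconstruction of why that kind of whitespace-based filter is bypassable: getopt-style options also accept their argument with no separator, so "-z/sbin/reboot" carries the same postrotate command as "-z reboot" but contains no whitespace for the filter to match. The regex below is an illustrative stand-in, not Sierra's actual code.

```python
import re

def naive_filter(cmd: str) -> str:
    """Strip '-z <command>' only when whitespace separates flag and argument."""
    return re.sub(r"-z\s+\S+", "", cmd)

blocked = naive_filter("tcpdump -z reboot -i eth0")      # '-z reboot' removed
bypass = naive_filter("tcpdump -z/sbin/reboot -i eth0")  # passes untouched

print(blocked)  # 'tcpdump  -i eth0'              -> injection stripped
print(bypass)   # 'tcpdump -z/sbin/reboot -i eth0' -> postrotate command survives
```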


Are Your Development Practices Introducing API Security Risks?

APIs are a prime target for such attacks because cybercriminals can overload the API endpoint with unwanted traffic. Ultimately, the attacker’s goal is to use the API as a blueprint to find internal objects or database structures to exploit. For example, a vulnerable API endpoint backend that connects to a frontend service can expose end users to risk. One researcher even discovered a way to abuse automobiles’ APIs and telematics systems to execute various tasks remotely, such as locking the vehicle. In the past, bot management technologies, like CAPTCHA, were developed to block bots’ access to web pages that were intended only for human users. However, that approach to security assumes that all automated traffic is malicious. As application environments have matured and multiplied, automation has become essential for executing simple functions. Thus, organizations cannot rely on simplistic web application firewall rules that block all traffic from automated sources by default. Instead, they need to quickly identify and differentiate good and bad bot traffic.
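
A minimal sketch of that "differentiate, don't blanket-block" idea as it might look in request-handling code: score each client on simple signals (a verified-bot allowlist and request rate) before deciding to allow, challenge, or block. The allowlist, thresholds, and signals are illustrative assumptions, not a production bot-management policy.

```python
import time
from collections import defaultdict

VERIFIED_BOTS = {"GoodSearchBot/1.0"}  # hypothetical known-good automation
RATE_LIMIT = 10                        # illustrative: max requests/sec/client

_hits: dict[str, list[float]] = defaultdict(list)

def classify(client_ip: str, user_agent: str) -> str:
    """Return 'allow', 'challenge', or 'block' for an incoming API request."""
    now = time.time()
    _hits[client_ip] = [t for t in _hits[client_ip] if now - t < 1.0]
    _hits[client_ip].append(now)

    if user_agent in VERIFIED_BOTS:
        return "allow"       # good bots are not punished for being automated
    if len(_hits[client_ip]) > RATE_LIMIT:
        return "block"       # hammering the endpoint: likely abusive
    if "bot" in user_agent.lower():
        return "challenge"   # unknown automation: verify before trusting
    return "allow"
```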


Zero-shot learning and the foundations of generative AI

One application of few-shot learning techniques is in healthcare, where medical images with their diagnoses can be used to develop a classification model. “Different hospitals may diagnose conditions differently,” says Talby. “With one- or few-shot learning, algorithms can be prompted by the clinician, using no code, to achieve a certain outcome.” But don’t expect fully automated radiological diagnoses too soon. Talby says, “While the ability to automatically extract information is highly valuable, one-, few-, or even zero-shot learning will not replace medical professionals anytime soon.” Pandurang Kamat, CTO at Persistent, shares several other potential applications. “Zero-shot and few-shot learning techniques unlock opportunities in areas such as drug discovery, molecule discovery, zero-day exploits, case deflection for customer-support teams, and others where labeled training data may be hard to come by.” Kamat also warns of current limitations. 
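
As a concrete sketch of zero-shot inference, the snippet below uses the Hugging Face transformers zero-shot classification pipeline: the model assigns labels it was never explicitly trained on, guided only by the label names. The model choice, input text, and candidate labels are illustrative assumptions.

```python
from transformers import pipeline

# An NLI-trained model repurposed for zero-shot classification.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "Patient reports a persistent cough and mild fever for three days.",
    candidate_labels=["respiratory infection", "cardiac issue", "injury"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```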


PWC highlights 11 ChatGPT and generative AI security trends to watch in 2023

“Many of the interesting business use cases emerge when you consider that you can further train (fine-tune) generative AI models with your own content, documentation and assets so it can operate on the unique capabilities of your business, in your context. In this way, a business can extend generative AI in the ways they work with their unique IP and knowledge. “This is where security and privacy become important. For a business, the ways you prompt generative AI to generate content should be private for your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content. ... “Using generative AI to innovate the audit has amazing possibilities! Sophisticated generative AI has the ability to create responses that take into account certain situations while being written in simple, easy-to-understand language.


What leaders get wrong about responsibility

One way of demonstrating responsibility is through the process of asking and answering questions. Many get at least one part of the process right: by responding to the questions received from their employees, leaders believe that they are showing themselves to be reliable and trustworthy. This isn’t too far off base. The word responsibility, after all, stems from the Latin respons, meaning respond or answer to. Unfortunately, by not asking questions themselves, leaders prevent employees from demonstrating the same kind of reliable and trustworthy behavior—and that makes it harder to embed the locally owned responsibility that they are looking for. ... When leaders use questions to assume responsibility themselves, they think, talk, and behave in a way that puts them at the center of attention (see the left side of the figure above). The questions they ask are quiz or test questions designed to confirm that the respondents see the world in the same way the leader does—e.g., “What are the components of a good marketing campaign?”


OT Network Security Myths Busted in a Pair of Hacks

In one set of findings, a research team from Forescout Technologies was able to bypass safety and functional guardrails in an OT network and move laterally across different network segments at the lowest levels of the network: the controller level (aka Purdue level 1), where PLCs live and run the physical operations of an industrial plant. The researchers used two newly disclosed Schneider Modicon M340 PLC vulnerabilities that they found — a remote code execution (RCE) flaw and an authentication bypass vulnerability — to breach the PLC and take the attack to the next level by pivoting from the PLC to its connected devices in order to manipulate them to perform nefarious physical operations. "We are trying to dispel the notion that you hear among asset owners and other parties that Level 1 devices and Level 1 networks are somehow different from regular Ethernet networks and Windows [machines] and that you cannot move through them in very similar ways," says Jos Wetzels.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley