While technology has enabled more employees to work remotely – bringing considerable benefits in doing so – it has also facilitated digital presenteeism, Qatalog and GitLab concluded. One solution is to make technology less invasive and "more considerate of the user and completely redesigned for the new way of work, rather than supporting old habits in new environments" – although this may be easier said than done. According to Rauf, current solutions require a "radical redesign that is more considerate of the user and prioritizes their objectives, rather than simply capturing our attention." A culture shift is also necessary for async work to become normalized, says Rauf. That shift comes from the top and starts with trust: "When leaders send a message to their team, make clear whether or not it needs an immediate response or, better yet, schedule updates to go out when people are most likely online. If I message a team member at an odd hour, I prefix a 'for tomorrow' or 'no rush', so they know it's not an urgent issue."
Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations. Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI.
AIoT is a growing field with many potential benefits. Businesses that adopt AIoT can improve their efficiency, decision-making, customization, costs, and safety. ...
Increased efficiency: By combining AI with IoT, businesses can automate tasks and processes that would otherwise be performed manually. This can free up employees to focus on more important tasks and increase overall productivity.
Improved decision-making: By collecting data from various sources and using AI to analyze it, businesses can gain insights they wouldn’t otherwise have. This can help businesses make more informed decisions, from product development to marketing.
Greater customization: Businesses can create customized products and services tailored to their customers’ needs and preferences using data collected from IoT devices. This can lead to increased customer satisfaction and loyalty.
Reduced costs: Businesses can reduce their labor costs by automating tasks and processes. Additionally, AIoT can help businesses reduce their energy costs by optimizing their use of resources.
Increased safety: By monitoring conditions and using AI to identify potential hazards, businesses can take steps to prevent accidents and injuries.
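The "increased safety" point above hinges on AI spotting hazards in a stream of sensor readings. A minimal sketch of that idea — the sensor values, window size, and threshold are all illustrative assumptions, and a real AIoT deployment would use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Temperature feed from a hypothetical IoT sensor: stable, then a spike.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 21.2, 35.5, 21.1]
print(flag_anomalies(temps))  # [6] -> the 35.5 reading is flagged
```

In practice the same pattern — learn a baseline, alert on deviation — underlies both the safety and the cost-optimization use cases, just with different signals being monitored.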
Despite the best-laid plans, bear in mind that these are active, interconnected, and dynamic systems. It’s impossible to separate physical and cybersecurity elements, as their role in business operations is so foundational. As the landscape of new technologies and best practices changes, adapt along with it. Ensure that the lines of communication are open, that management stays involved in the process, and that all the key parties across IT and OT are committed to working collaboratively to strengthen every element of security. These tenets will help manufacturing organizations stay nimble in the face of an ever-changing security landscape. As the convergence of IT and OT continues, the risk of cyberthreats will rise along with it. Building a collaborative security team across both IT and OT will help reduce organizational risk and fortify critical infrastructure. By involving leadership, setting a plan, and staying adaptable as things change, security leaders will be armed with a comprehensive security approach that supports near-term needs and offers long-term business sustainability.
When it comes to mitigating the ever-evolving cyber threat, diversity is a crucial, but often overlooked, factor. As cyber attacks become increasingly culturally nuanced, we must meet the challenge by drawing from a wide range of backgrounds and life experiences. Cyber attacks come from everywhere - from a wide range of ages, locations, and educational backgrounds - so our responders should too. Cyber security is often perceived as revolving around, and driven mainly by, highly complex technology. While tech clearly plays a crucial role in mitigating cyber attacks, successfully countering them would not be possible without the role people play - a role enriched hugely by a workforce that spans as many educational and socio-economic backgrounds as possible. By making a concerted effort towards a more diverse workforce, the cyber-security industry will gain a deeper awareness of the cultural nuances that underlie cyber attacks. It’s also important to fully understand what we mean by diverse hiring: considering entry routes into the industry is a big part of attracting a broader range of demographics.
We have more data than ever before, but it is not enough to merely accumulate it. Dedicate time and resources to establishing digital governance to ensure the data you are using is clean, consistently implemented, and universally understood. ... The tech team is not solely responsible for the quality of our data—we all need to take ownership of and champion the data we use. Visualization tools bridge the gap between the tech team and the business team, doing away with barriers to entry and enabling end-to-end analytics. In this way, you can empower employees to immerse themselves in and take ownership of the data at hand. Users no longer have to submit a request to the tech team to create a report and twiddle their thumbs until it comes back. They can now take initiative and do it themselves, creating a more streamlined process and a more informed group of employees who can work quickly to make data-driven decisions. Furthermore, when you empower people to take control of their data and ask their own questions, they may uncover new insights they would never have found when presented with pre-packaged reports.
From Cornell's perspective, DevOps — or, hopefully, DevSecOps — groups should really spearhead the management of software supply chain risk. "They are the ones who own the software development process, and they see the code that is written," he says. "They see the components that are pulled in. They watch the software get built. And they make it available to whoever is next on down the line." Given this vantage point, they can positively impact an organization's software supply chain security posture by implementing good policies and practices around what open source code is included in their software and when those open source components are upgraded. "Forward-leaning DevSecOps teams can take advantage of their automation and testing to start pushing for more aggressive component-upgrade life cycles and other approaches that help minimize technical debt," he explains. He says they are also well positioned, and own the tooling, to help generate software bills of materials (SBOMs) that they can then provide to software consumers who are in turn looking to manage their own supply chain risk.
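To make the SBOM hand-off concrete, here is a sketch that assembles a minimal CycloneDX-style document from a build's dependency list. The dependency names are hypothetical and only a small subset of the real CycloneDX schema is shown; in practice a CI-integrated generator would emit a far fuller document:

```python
import json
import uuid

def make_sbom(components):
    """Assemble a minimal CycloneDX-style SBOM (illustrative subset of
    the real schema) from a list of (name, version) dependencies."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

# Hypothetical dependencies observed during a build.
deps = [("requests", "2.31.0"), ("urllib3", "2.0.7")]
sbom = make_sbom(deps)
print(json.dumps(sbom, indent=2))
```

Because the DevSecOps team already watches every component get pulled in, emitting a document like this is a natural byproduct of the build rather than an extra audit step.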
Identifying risks and documenting response actions are only part of the equation. Crucial to the overall C-SCRM process is the communication and education of all parties involved about organizational risks and how to respond. Organizations must ensure that all personnel and third-party partners are trained on supply chain risks, encourage awareness from the top down, and involve partners and suppliers in organization-wide tests and assessments of response plans. Organizations should establish open communications with their supplier partners about risk concerns and encourage partners to do the same in return. The general idea is individual strength through community strength. As an organization matures its C-SCRM (or overall cybersecurity) process, lessons learned and best practices should be shared along the way to help bolster others’ programs. The concept of C-SCRM is not a new one. In fact, there are many sources that have provided guidance on the topic over the years. The National Institute of Standards and Technology (NIST) has a Special Publication (SP) 800-161 and an Internal Report (IR) 8276 on the subject.
The good news is that as business leaders trust their data, they’ll use it more for decision-making, analysis, and prediction. With that comes an expectation that the data, network, and systems for accessing key data sources are available and reliable. Ian Funnell, manager of developer relations at Matillion, says, “The key data quality metric for dataops teams to prioritize is availability. Data quality starts at the source because it’s the source data that runs today’s business operations.” Funnell suggests that dataops must also show they can drive data and systems improvements. He says, “Dataops is concerned with the automation of the data processing life cycle that powers data integration and, when used properly, allows quick and reliable data processing changes.” Barr Moses, CEO and cofounder of Monte Carlo Data, shares a similar perspective. “After speaking with hundreds of data teams over the years about how they measure the impact of data quality or lack thereof, I found that two key metrics—time to detection and time to resolution for data downtime—offer a good start.”
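The two metrics Moses names are straightforward to compute once incidents are logged with timestamps. A minimal sketch, with hypothetical incident data (when the bad data landed, when monitoring detected it, when the pipeline was fixed):

```python
from datetime import datetime

def downtime_metrics(incidents):
    """Compute mean time to detection (TTD) and mean time to resolution
    (TTR), in minutes, from incidents recorded as
    (occurred_at, detected_at, resolved_at) timestamp triples."""
    ttd = [(d - o).total_seconds() / 60 for o, d, _ in incidents]
    ttr = [(r - d).total_seconds() / 60 for _, d, r in incidents]
    return sum(ttd) / len(ttd), sum(ttr) / len(ttr)

fmt = "%Y-%m-%d %H:%M"
# Two hypothetical data-downtime incidents.
incidents = [
    (datetime.strptime("2023-05-01 02:00", fmt),   # bad data lands
     datetime.strptime("2023-05-01 02:30", fmt),   # monitor fires
     datetime.strptime("2023-05-01 04:30", fmt)),  # pipeline fixed
    (datetime.strptime("2023-05-02 09:00", fmt),
     datetime.strptime("2023-05-02 09:10", fmt),
     datetime.strptime("2023-05-02 10:10", fmt)),
]
mean_ttd, mean_ttr = downtime_metrics(incidents)
print(mean_ttd, mean_ttr)  # 20.0 90.0
```

Tracking both numbers separately matters: a long TTD means monitoring gaps, while a long TTR points at the team's repair process — each calls for a different fix.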
The NFT space has ongoing issues with fraud, including bad actors wholesale lifting art from one project and using it in a second — a process often referred to as “copyminting.” The resulting derivative projects have a few too many similarities to the original to be considered anything other than a ripoff. While most of these duplicates do very little sales volume relative to the original, they may damage the underlying brand, contribute to the overall distrust of the NFT space, or trick less savvy buyers into spending money on the jpg equivalent of a street vendor’s fake Rolex watches. To help combat this fraud, a few companies are emerging that specialize in NFT fraud detection. They tend to leverage blockchain data to determine which project came first and apply image detection to find metadata matches. One of these solutions is Optic, which uses artificial intelligence and machine learning to analyze the images associated with an NFT, helping NFT marketplaces and minting platforms catch copies and protect both creators and buyers.
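The image-matching half of that detection is often built on perceptual hashing: re-encoding or lightly editing an image barely changes its hash, so copymints land close to the original. A toy sketch using an "average hash" on tiny grayscale grids — the grids stand in for real images, and production systems like Optic use far more robust models:

```python
def average_hash(pixels):
    """Perceptual 'average hash': each pixel becomes 1 if it is brighter
    than the image's mean brightness, else 0. Near-identical images
    yield near-identical bit strings. `pixels` is a grayscale grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance suggests a copy."""
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200, 30, 30],
            [200, 200, 30, 30],
            [30, 30, 200, 200],
            [30, 30, 200, 200]]
# A re-encoded copy: slightly different pixel values, same structure.
copymint = [[190, 210, 25, 40],
            [205, 195, 35, 28],
            [28, 35, 195, 205],
            [40, 25, 210, 190]]

dist = hamming(average_hash(original), average_hash(copymint))
print(dist)  # 0 -> flagged as a likely duplicate
```

Pairing a hash match like this with on-chain mint timestamps is what lets a detector say not just "these two images match" but "this one came first."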
Quote for the day:
"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey