Daily Tech Digest - October 17, 2023

Beware the cost traps that can strain precious cybersecurity budgets

Overlapping services that duplicate functions are another common overspend that can eat into security budgets. "Paying for these duplicate security functions can be financially inefficient and strain the budget," says Nick Trueman, CISO at cloud services provider Nasstar. It can also result in integration challenges whereby coordinating and integrating multiple providers with similar functions leads to complexities and interoperability issues, he adds. CISOs should conduct a comprehensive review and identify all current security providers and the services they offer. ... On the topic of redundancies, CISOs can often end up paying for tools that do not deliver the expected benefits, significantly impacting their security budgets and coverage plans. CISOs may encounter scenarios where they invest in security tools or technologies that, despite their initial promise, fail to provide the anticipated value or return on investment (ROI), says Paul Baird, chief technical security officer at Qualys. This could happen for several reasons, including inadequate integration with existing systems, limited user adoption, or the tools not effectively addressing the organization's specific security needs.


Essential cyber hygiene: Making cyber defense cost effective

When it comes to dollars and cents, the industry as a whole has made many attempts to calculate the cost of a cyber attack. The same can’t be said about estimating the costs of implementing cyber defenses. But there’s value in knowing both of those metrics. Knowing what an enterprise can spend to prevent an attack is helpful when you know what they’re willing to spend to recover from an attack. For example, if the cost of recovering from a cyber attack is $1.25 million but an enterprise can spend only $1 million on implementing a set of robust cyber defenses, which one should they choose? To estimate the cost of IG1 Safeguards, we looked at the tools that an enterprise needs to implement them. Tools are priced in many ways, the most common being the following: by number of employees, users, workstations/servers, and/or by usage (e.g., megabyte, gigabyte, hours). CIS created IG1 Enterprise Profiles to help streamline the process of calculating costs. Our estimate shows that obtaining and deploying commercially-supported versions of the tools should be less than 20% of the Information Technology (IT) budget for any size enterprise.
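The budget comparison above is simple arithmetic, and can be sketched in a few lines. The per-employee pricing model, dollar figures, and 20%-of-IT-budget cap below are illustrative assumptions for demonstration, not CIS's actual IG1 Enterprise Profile numbers:

```python
# Illustrative sketch: estimate a defense tool set's cost and check it against
# a budget rule of thumb. All figures here are hypothetical.

def defense_cost(employees: int, price_per_employee: float, flat_tools: float) -> float:
    """Estimate annual cost of tools priced per employee plus flat-fee tools."""
    return employees * price_per_employee + flat_tools

def within_budget(cost: float, it_budget: float, cap: float = 0.20) -> bool:
    """Check the 'less than 20% of the IT budget' rule of thumb from the article."""
    return cost <= cap * it_budget

cost = defense_cost(employees=150, price_per_employee=300.0, flat_tools=25_000.0)
print(cost)                                       # 70000.0
print(within_budget(cost, it_budget=400_000.0))   # True (70k <= 80k cap)
```

Swapping in real vendor quotes and the actual IT budget turns this from a sketch into a usable first-pass estimate.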


Why the human factor is critical to ITOps success

Communication between developer teams and “business types” can be fraught. Developers typically work very hard for long hours to deliver what customers want. Yet efforts frequently fall flat, due in no small part to a failure of one or both sides to understand or explain what the other really wants or needs, says Shafrir. “There will always be a wall between the two, but especially during this time with tonnes of services on the internet and daily changes to software. It’s a problem if only business people are in touch with customers,” he says. Developers often have little idea how the customers are using the product – not least because they write code and “throw it over the fence”. “Then it’s frustrating when we’re [developers] told our quality is very low and it’s not a good job and we don’t work hard enough and other things,” says Shafrir. Shafrir recommends that IT leaders “take down that wall” between the two teams. If developers are notified and know exactly – continuously – how the code is performing in the customer environment, fixes can be rolled out faster. 


Security Governance and Risk Management in Enterprise Architecture

Security governance isn't just a rulebook. It's a structured approach that champions data protection, system reliability, and seamless business operations. With this governance in place, the intricate realm of cybersecurity becomes a navigable terrain. True security roots itself deep within organizational culture. When every team member, from the top brass to the newest recruit, values security, the organization stands united and fortified. A collective commitment to security amplifies the organization's resilience. ... Frameworks, especially ones like the NIST Risk Management Framework, offer more than theoretical value: they shape practical decisions in technology, placing risk considerations at the forefront. Adopting such guiding principles ensures that architectural choices resonate with both innovation and security. Still, the landscape of risk is dynamic, changing with every technological advancement and emerging threat. Regular, thorough risk assessments become a beacon that illuminates potential security gaps. Allocating resources to these evaluations ensures a resilient and adaptive enterprise architecture, always prepared for the challenges ahead.


Why A One-Size-Fits-All 'Compliance' Plan Can Be Dangerous

IT departments these days use many different architectures with various hardware, software and network configurations. Because of these differences, it's difficult to create a single cybersecurity formula that works for all companies. Some of the pitfalls of trying to cut corners and save costs by implementing a generic plan include: Lack Of Customization - A one-size-fits-all approach doesn't consider the specific problems and needs of each company. What works for one organization may not be enough to address the weaknesses and particular requirements of the next. It's important to customize security measures to fit the unique characteristics of each company to effectively protect against cyber threats. Increased Risk Of Breaches - When companies use a standardized compliance plan, it sets a basic level of security. However, this plan might not take into account the specific risks and security gaps that exist in each organization. Without customized security measures, a greater chance exists of experiencing data breaches or cyberattacks.


Generative AI is everything, everywhere, all at once

Unlike generative AI, which exploded within the past year thanks in part to OpenAI's consumer-facing ChatGPT, AI is nothing new. And it's a fairly ambiguous term, Toubia explained. "There's a wide range of things you could label as AI or machine learning," he said. "There's some very simple statistical methods that have been around for over 100 years that technically could be as clever as AI." Given the enigmatic nature of generative AI, it's also a complicated product to patent, audit, or regulate, which further exacerbates AI washing. "Companies don't really have to publish or explain their AI because it's a trade secret. There's no patent that you could read, and we don't really know what's under the hood, so to speak," Toubia said. Regulatory institutions like the FTC are certainly trying to control the unwieldy industry with industry-wide warnings and reports. While he appreciates the ideas behind the warnings, Thurai is doubtful that the FTC's stern warnings and oversights will be enforced due to how difficult it will be to prove in court.


The Whats and Hows of DevOps Talent Retention

Given that a lack of opportunities for growth accounts for close to half of DevOps employees’ turnover, it once again highlights the importance of a well-planned approach to support ongoing learning. By understanding employees’ performance, preferences and environment, a wider range of support offers can be implemented. Providing tailored assignments that allow employees to focus on their skills or passions not only helps build commitment to the job but also acts as a motivator. This can stem from a simple 15-minute project or a year-long program. But advocating personal development courses and upskilling techniques is not enough. Employers must also harness peer recognition. A sales organization within the United States Postal Service (USPS) recently made an attempt to boost peer recognition by setting up a simple online platform that enables employees to identify behavior associated with newly learned skills. The group saw overall employee engagement rise by 8% in the initial pilot group. Such strategies were then used to improve work across the organization.


How to Partner with Law Enforcement Following a Cyberattack

Law enforcement will come with the intention of acting as a partner to the victim organization, alongside other stakeholders like remediation firms and insurance companies. “We would really expect to be seen as true partners in every sense of the word,” says Alway. A law enforcement team could include investigative agents with cybersecurity backgrounds, as well as technical experts, such as computer scientists and data analysts. That partnership will be based on information sharing. Organizations will tell law enforcement about the nature of the incident, provide logs and any other evidence of the intrusion, and answer questions. Law enforcement will share their knowledge of IOCs and any information they have that can help enterprises during the remediation process. “There’s no such thing as over communication in cyber incidents,” says Alway. It is important to keep in mind that law enforcement’s job takes time. “A lot of times the investigation piece could drag on for multiple years, whereas the company [or] organization is on a shorter timeline,” says Cabrera.


Cyber security professionals say industry is “booming”

Cyber security professionals are still positive about the industry and their opportunities despite the economic climate, according to The Chartered Institute of Information Security's (CIISec) 2022/2023 State of the Profession report – the eighth annual survey of the cyber security industry. In the survey of 302 security professionals, almost 80% say they have ‘good’ or ‘excellent’ career prospects, and more than 84% say the industry is ‘growing’ or ‘booming’. Despite the industry's relative protection from economic challenges, the report highlights that it is still plagued by issues including stress and overwork. 22% of respondents work more than the 48-hour weekly limit set by the UK Government, and 8% work more than 55 hours which, according to the World Health Organisation, marks the boundary between safe and unsafe working hours. The report also found: Worries over workload loom over cyber security professionals - When asked what keeps them awake at night, the two main sources of stress for cyber professionals are day-to-day stress/workload (identified by 50%) and suffering a cyber-attack (32%).


Are enterprise architects the new platform team leaders?

Today there is a need for platform teams to architect the connections between business processes, outcomes, and the technology. Many teams still operate in silos, whether around specific functional pieces of technology or simply within individual teams. However, several key factors are now reshaping the way teams approach their work. Easy access to technology outside of corporate IT has fundamentally changed the dynamic. In addition, the idea of IT owning a very small piece of the technology is no longer acceptable. For example, if you are a database team, you can’t just be responsible for the database itself – you must also own the delivery of that database as a service, including the additional technology around it like the OS, compute, memory, and all elements of cost, security, access, and performance. Enterprise architects in this new role as platform leaders must adopt a cross-functional mindset. They must look left and right, cross-functionally, at the technology, how it should fit together, the services the company should offer – and for what use cases.



Quote for the day:

"A leader is always first in line during times of criticism and last in line during times of recognition." -- Orrin Woodward

Daily Tech Digest - October 16, 2023

A Holistic Approach to Cyber Resilience

Beyond investing in the right training techniques to build resilience, it is important for security leaders to set up the right culture for cybersecurity and ultimately build a strong cybersecurity foundation. To help meet today’s cybersecurity challenges, organizations should treat cybersecurity as a team sport, working with employees to adopt a collective responsibility mindset throughout the entire organization so as to not place blame or pressure on just the cybersecurity teams. To start building this collective mindset, begin including employees outside of security teams in security training to avoid the blame game when an attack inevitably happens. ... Not only does this help ease the burden security teams feel, but it also ensures that all employees know the appropriate steps to take when encountering a potential threat. By focusing on creating a culture of understanding, employees outside the security team may be more open to learning from these incidents and identifying concerns in the future, ultimately giving your organization a more holistic view of the true state of its cyber resilience.


Why IT projects still fail

Some project leaders list the prevailing do-more-with-less expectation as another reason for failed IT projects today. They say this mentality generally leads to project teams lacking the resources that they need to get the desired work done on time. “Everybody is very concerned with that bottom line, and they should be concerned about that, but the other side of that is they’re expecting a few people to do a lot of things,” Phillips says. For example, she says workers are frequently assigned to multiple projects simultaneously, and many are assigned to that project work on top of their existing duties. As a result, these workers are pulled in too many different directions. Others say enterprise leaders underestimate costs and the time required to complete the work or they fail to allocate the right talent to the team, even as project managers surface the consequences of under-allocating the money, talent, and time needed for success. Experienced project leaders say it’s crucial for IT project managers and CIOs themselves to ensure that the business sponsors and C-suite executives get the information they need to be realistic about the required resources, support, and schedules.


Making sure open source doesn’t fail AI

The biggest difficulty is in defining open source in a world where data and software are so inextricably linked. As Maffulli describes, the most intense discussions among his working group revolve around the dependencies between training data and the instructions on how to apply it. Perhaps not surprisingly, given the complexity and the stakes involved, “there is no strong consensus right now on what that means,” he says. There are at least two approaches, with two primary factions squaring off in the working group. The first tries to stick closely to the comfortable concept of source code, promoting the idea that “source code” gets one-to-one translated to the data set. In this view, the combination of the instructions on how to build the model and the binary code is the source code subject to “open source.” The second faction sees things in a radically different way, believing that you can’t modify code without having access to the original data set. In this view, you need other things to effectively exercise the fundamental freedoms of open source. 


What Are Data Governance Tools, and How Do They Work?

Data governance tools catalog data assets; they collect data from databases, files, applications and other data sources. They then tag data assets based on predefined or custom metadata attributes and classify them based on their sensitivity, importance or relevance to specific compliance regulations. Data governance software ensures that data is accurate, complete and consistent by performing data quality checks and validations. ... Data governance tools help businesses define and manage data ownership, roles and responsibilities as well as implement data security and privacy measures. They ensure data management processes meet regulatory compliance and quality standards. They also help automate the workflow and provide structure to large volumes of data. Data governance tools serve several purposes, which include data quality management to ensure data remains accurate, complete and consistent across an organization. These tools can even be used to enforce compliance with regulatory requirements, such as GDPR and HIPAA. 
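The quality checks and sensitivity classification described above can be sketched in a few lines. The field names and sensitivity rules here are illustrative assumptions, not the behavior of any particular governance product:

```python
# Minimal sketch of two governance-tool tasks: tagging sensitive attributes
# and flagging completeness problems. Rules and field names are hypothetical.

SENSITIVE_FIELDS = {"email", "ssn", "dob"}  # illustrative classification policy

def classify(record: dict) -> set:
    """Tag which sensitive attributes a record contains."""
    return SENSITIVE_FIELDS & set(record)

def quality_issues(record: dict, required: set) -> list:
    """Flag completeness problems: required fields missing or empty."""
    return sorted(f for f in required if not record.get(f))

rec = {"id": 1, "email": "a@example.com", "name": ""}
print(classify(rec))                                  # {'email'}
print(quality_issues(rec, {"id", "name", "email"}))   # ['name']
```

Real tools layer the same two ideas — classification rules and validation rules — over catalogs spanning many databases, files, and applications.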


Enhancing Enterprise Solutions with SOC as a Service Network Protection

Companies that outsource their SOC activities can benefit from the expertise, cutting-edge technology, and risk-assessment skills of security professionals. Nearly seventy-one percent of SOC analysts report being burned out in their jobs, particularly since only a few of them are responsible for the safety of the entire company. Hackers can exploit holes in a business's infrastructure to gain unauthorized access or disrupt operations. The threat monitoring and oversight services provided by SOC as a Service help identify and assess potential risks across both OT and IT environments. Owing to this proactive approach, companies are able to tackle problems before they can be used against customers. A commonly overlooked practice is regularly checking for flaws in network infrastructure, software, and user accounts. These analyses uncover existing vulnerabilities and assess the risk associated with each problem, allowing businesses to prioritize updates and fixes. A SOC as a Service provider not only assists in identifying problems, but also in monitoring and resolving those flaws.


Unleashing the Power of AI and ML in Data

Businesses can leverage AI to generate data such as fake reviews and use that information to test and demo a product. This type of demo data generation helps to create a valuable and practical data product that is quick and efficient. One of the key benefits of using AI to generate mock data is that it allows businesses to test and demo data products without having to collect real data from users. ... In forecasting, ML delivers highly automated, finely granular, and more accurate predictions than manual projections. It solves the knowledge risk inherent in organizations where projections are based on “gut feel” and “years of experience.” ML can also pick up on the nuances and subtleties of multiple features playing out in parallel that are invisible to the human eye. ... AI is a powerful technology that can enhance and optimize data analysis, but it doesn’t replace the essential role of software engineers and human expertise. Great technology demands leadership, creativity, empathy, and the ability to navigate complex ecosystems and stakeholders – a uniquely human capacity.
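The mock-review idea above can be sketched without any AI at all; a production system might swap the templates for a generative model. Everything here — templates, fields, product names — is an illustrative assumption:

```python
# Hedged sketch: generate synthetic review records for demos and testing,
# so no real user data is needed. Templates and fields are hypothetical.
import random

TEMPLATES = [
    "Great {product}, the {feature} works well.",
    "Disappointed with the {feature} on this {product}.",
]

def mock_review(product: str, feature: str, seed=None) -> dict:
    """Produce one synthetic review record for demo purposes."""
    rng = random.Random(seed)  # seedable for reproducible demo data
    return {
        "product": product,
        "text": rng.choice(TEMPLATES).format(product=product, feature=feature),
        "rating": rng.randint(1, 5),
    }

review = mock_review("headphones", "battery", seed=42)
print(review["text"])
```

The same pattern — a deterministic generator behind a simple interface — is what makes demo data quick to produce and safe to share.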


How APAC organisations are tapping generative AI

Across the Asia-Pacific (APAC) region, organisations like GovTech and Culture Amp have been doubling down on GenAI initiatives, more so than other parts of the world. According to a recent study by Enterprise Strategy Group and TechTarget, 75% of APAC respondents plan to adopt generative AI within the next 12 months, with nearly a third already running GenAI workloads in production or testing the technology. The enthusiasm for generative AI in APAC is also reflected in IT budgets, with over half having allocated budgets to GenAI. Among them, 39% have allocated between 5% and 20% of their IT budget to the technology. The blinding speed of GenAI uptake among APAC organisations is also reflected in the 19% of organisations that are not yet sure if GenAI is a budget item. Nevertheless, the rapid emergence of GenAI as a top IT priority is both impressive and alarming. The study shows that GenAI has become the fifth most important strategic initiative in APAC, trailing behind digital transformation, automation, cyber security, and cost-cutting, and surpassing traditional priorities like cloud and application modernisation.


CISOs and board members are finding a common language

“The C-Suite and board of directors are increasingly relying on CISOs for guidance across a sophisticated threat landscape and changing market conditions,” said Jason Lee, CISO, Splunk. “These relationships provide CISOs the opportunity to become champions who strengthen an organization’s security culture and lead teams to become more cross-collaborative and resilient. By communicating key security metrics, CISOs can also guide boards on adopting emerging technologies, such as generative AI, to help improve cyber defense management and prepare for the future.” ... In 47% of organizations surveyed, the CISOs are now reporting directly to the CEO, indicating a closer relationship with the C-Suite and their respective governing boards. Boards of directors are increasingly looking to CISOs to guide cybersecurity strategy, offering an opportunity for CISOs to articulate value and fill in communication gaps. Numerous CISOs across many industries report regular participation in board meetings, including technology (100%), government (100%), communications and media (94%), healthcare (88%) and manufacturing (86%).


Generative AI an Emerging Risk as CISOs Shift Cyber Resilience Strategies

Enterprise risk executives should start by implementing clear rules prohibiting employees from using any unapproved web applications and tools. “It’s really another instance of shadow IT, which includes any IT-related purchases, activities or uses that the IT department is unaware of and which has historically been a big problem in most organizations,” Stevens says. When employees use approved GenAI tools, the company needs rules governing what data can -- and, more importantly, cannot -- be used with the tool. “But these rules shouldn’t be limited to only GenAI tools,” she adds. “They should be in place for all tools and applications used in the organization.” These execs should partner with any key stakeholders who might use GenAI tools. Stevens says ideally, the organization has a CISO, with the infosec organization a key stakeholder for every application that accesses and stores data or lives within the company’s network and ecosystem.


How To Use Serverless Architecture

Imagine an application as being composed of two parts: the frontend, which users interact with, and the backend, which powers the frontend. In serverless architectures, this backend code runs on infrastructure provided by the cloud service, removing the need for businesses to worry about managing physical servers. While this does simplify things significantly, it doesn’t entirely remove responsibility from the business owner or the developer: the physical servers become the cloud provider’s concern, but securing your code still lies with you, and some initial setup is necessary, albeit less time-consuming than with traditional servers. Serverless architectures are also event-driven. When certain events or triggers happen (an HTTP request or a database event, for example), your application responds. The building blocks of serverless applications are functions: small pieces of code, each doing a specific task.
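A serverless function of the kind described above can be sketched as follows. The handler signature and event shape loosely follow AWS Lambda's HTTP-proxy conventions, but that is an illustrative assumption; other providers use different shapes:

```python
# Sketch of an event-driven serverless function: one small function, one task.
# Event shape is modeled loosely on an HTTP trigger and is hypothetical.
import json

def handler(event: dict, context=None) -> dict:
    """Respond to an HTTP-style trigger event with a JSON body."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulating the trigger locally, as a unit test would:
resp = handler({"queryStringParameters": {"name": "dev"}})
print(resp["statusCode"])  # 200
```

Because the function takes a plain event dict and returns a plain dict, it can be tested locally with no cloud infrastructure at all, which is part of the appeal.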



Quote for the day:

"Thinking should become your capital asset, no matter whatever ups and downs you come across in your life." -- Dr. APJ Kalam

Daily Tech Digest - October 15, 2023

Generative AI and the legal landscape: Evolving regulations and implications

So far we’ve seen AI giants as the primary targets of several lawsuits that revolve around their use of copyrighted data to create and train their models. Recent class action lawsuits filed in the Northern District of California, including one filed on behalf of authors and another on behalf of aggrieved citizens raise allegations of copyright infringement, consumer protection and violations of data protection laws. These filings highlight the importance of responsible data handling, and may point to the need to disclose training data sources in the future. However, AI creators like OpenAI aren’t the only companies dealing with the risk presented by implementing gen AI models. When applications rely heavily on a model, there is risk that one that has been illegally trained can pollute the entire product. ... It is clear that CEOs feel pressure to embrace gen AI tools to augment productivity across their organizations. However, many companies lack a sense of organizational readiness to implement them. Uncertainty abounds while regulations are hammered out, and the first cases prepare for litigation.


Cars are a ‘privacy nightmare on wheels’

Apart from data entered directly into a car’s “infotainment” system, many cars can collect data in the background via cameras, microphones, sensors and connected phones and apps. A lot of these data are used, at least in part, for legitimate purposes such as making driving more enjoyable and safer for the driver, passengers and pedestrians. But they can also be supplemented with data collected from other sources and used for other purposes. For instance, data may be collected from your website visit, your test drive at a dealership, or from third parties including “marketing agencies” and “providers of data-collecting devices, products or systems that you use”. ... It’s safe to say car manufacturers generally don’t want privacy laws tightened. The Federal Chamber of Automotive Industries (FCAI) represents companies distributing 68 brands of various types of vehicles in Australia. During the recent review of our privacy legislation, the FCAI made a submission to the Attorney General’s department arguing against many of the privacy law reforms under consideration.


The Impact of AI and Machine Learning in HR: Enhancing Recruitment and Employee Engagement

Amid a new digital landscape, rising employee expectations, and evolving business dynamics, HR professionals contend with a slew of challenges. Adapting HR processes and systems to digital transformation, especially in organisations with legacy systems can prove demanding. HR leaders today grapple with tasks ranging from keeping up with talent acquisition in the digital world and rapidly evolving HR technology such as HRIS, AI Tools, and Data Analytics to boost employee engagement. They also have to adapt to various recruitment strategies to find the right talent in a competitive job market. While navigating these challenges, leaders should also remain vigilant of potential advantages on the horizon. These encompass enhancing the overall employee journey, embracing a variety of learning and growth initiatives, and streamlining decision-making through AI to enhance results while safeguarding efficiency. Furthermore, AI can analyze large amounts of data quickly, empowering decision-makers with useful insights to help them make informed decisions. This data-driven decision-making can result in better resource allocation, better strategy, and increased work satisfaction.


Microsoft to create team dedicated to data center automation and robotics

The move comes a month after a Microsoft Azure outage in Australia was partially blamed on poor software automation. A utility power sag tripped cooling units, shutting them down and causing temperatures to rise. With insufficient staff on site to reboot the units, the automated system shut down servers to protect them from overheating. Instead, the system could have been designed to reboot the cooling units. “We are exploring ways to improve existing automation to be more resilient to various voltage sag event types,” Microsoft said in a post-mortem. Robots and data centers have a long history, with numerous companies and research groups trying to build computers that could look after fellow computers. ... "As far as robotics, our hyperscale data centers are more like warehouses and most of the processes require a robot to navigate to a specific location to perform a task," Google's VP of data centers Joe Kava told DCD in 2021. “However, even as advanced as robotics have become, many of the tests in data centers are much more complicated than in other industries that have employed large-scale robotic implementations."


What does the DPDP Act mean for philanthropy in India?

Given the stringent requirements of the DPDP Act, there’s a pressing need for revisiting and potentially revising the CSR guidelines. Striking a balance between accountability and privacy becomes crucial in ensuring compliance with both CSR and data protection mandates. While accountability remains paramount, it’s time to transition from rigid metrics to narratives of change. By fostering relationships built on mutual respect and shared learning, practices followed by donor organisations can resonate with the ethos of the DPDP Act and nurture a more collaborative philanthropic ecosystem. This necessitates a fundamental rethinking of how social impact can be measured, and shifting the focus from data collection to storytelling and community empowerment. By upholding privacy and agency, as per Sections 6 and 12, the law provides an opening to develop more participatory and human-centred evaluation frameworks. Funders are pivotal in enabling this evolution by modifying expectations, building capacity, and championing new trust-based and collaborative models of assessing progress.


LLMs Demand Observability-Driven Development

With good observability data, you can use that same data to feed back into your evaluation system and iterate on it in production. The first step is to use this data to evaluate the representativity of your production data set, which you can derive from the quantity and diversity of use cases. You can make a surprising amount of improvements to an LLM-based product without even touching any prompt engineering, simply by examining user interactions, scoring the quality of the response, and acting on the correctable errors (mainly data model mismatches and parsing/validation checks). You can fix or handle these manually in the code, which will also give you a bunch of test cases proving that your corrections actually work. These tests will not verify that a particular input always yields a correct final output, but they will verify that a correctable LLM output can indeed be corrected. You can go a long way in the realm of pure software without reaching for prompt engineering. But ultimately, the only way to improve LLM-based software is by adjusting the prompt, scoring the quality of the responses, and readjusting accordingly.
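The "correctable errors" loop described above can be sketched as code: validate the model's raw output against an expected data model, repair common mismatches, and record each repair so it can feed back into evaluation. The schema and the specific repairs below are illustrative assumptions:

```python
# Sketch: parse and repair an LLM's output, logging corrections applied.
# Expected schema and repair rules are hypothetical examples.
import json

EXPECTED_KEYS = {"title", "tags"}

def parse_llm_output(raw: str):
    """Return (parsed record, list of corrections applied)."""
    corrections = []
    text = raw.strip()
    if text.startswith("```"):        # repair: model wrapped JSON in a fence
        text = text.strip("`\n")
        if text.startswith("json"):
            text = text[4:]
        corrections.append("stripped_code_fence")
    data = json.loads(text)
    if isinstance(data.get("tags"), str):  # repair: string instead of list
        data["tags"] = [t.strip() for t in data["tags"].split(",")]
        corrections.append("split_tags_string")
    missing = EXPECTED_KEYS - data.keys()
    if missing:                            # log, don't fix: needs a retry/prompt change
        corrections.append(f"missing:{sorted(missing)}")
    return data, corrections

raw = '```json\n{"title": "Q3 report", "tags": "finance, quarterly"}\n```'
data, fixes = parse_llm_output(raw)
print(data["tags"])   # ['finance', 'quarterly']
print(fixes)          # ['stripped_code_fence', 'split_tags_string']
```

Each entry in `fixes` is an observability event: counting them in production tells you which mismatches are frequent enough to be worth a prompt change rather than a code-side repair.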


Feds Warn Healthcare Sector of 'NoEscape' RaaS Gang Threats

The developers of NoEscape ransomware are unknown but they claim to have created their malware and associated infrastructure "entirely from scratch," HHS HC3 said. But security researchers have noted that the ransomware encryptors of NoEscape and Avaddon are nearly identical, with only one notable change in encryption algorithms, HHS HC3 wrote. "Previously, the Avaddon encryptor utilized AES for file encryption, with NoEscape switching to the Salsa20 algorithm. Otherwise, the encryptors are virtually identical, with the encryption logic and file formats almost identical, including a unique way of 'chunking of the RSA-encrypted blobs.'" While researchers have observed evidence suggesting that NoEscape is related to Avaddon, unlike Avaddon, it has yet to be determined whether there is a free NoEscape decryptor that organizations can use to recover encrypted files, HHS HC3 said. "Until then, unless certain detection and prevention methods are put in place, a successful exploitation by NoEscape ransomware will almost certainly result in the encryption and exfiltration of significant quantities of data."


5 Steps For Building Your Enterprise Semantic Recommendation Engine

After creating the supporting data models, the next step in building a semantic recommendation engine is to construct the graph. The graph acts as a database of nodes and connections between nodes (called edges) that houses all of the content relationships defined in the ontology model. Building the graph involves both ingesting and enriching source data. Ingestion maps raw data to nodes and edges in the graph. Enrichment appends additional attributes, tags, and metadata to enhance the data. This enriched data is then transformed into semantic triples, which are subject-predicate-object structures that capture relationships. In our example, the healthcare provider could transform their enriched data into triples that capture the relationships between diagnoses and medical subjects, and medical subjects and content. Converting data into a web of semantic triples and loading it into the graph enables efficient querying. The knowledge graph’s flexibility also enables continuous integration of new data to keep recommendations relevant.
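The subject-predicate-object structure described above can be sketched as a tiny in-memory triple store. The entity and predicate names below follow the article's healthcare example but are otherwise illustrative assumptions:

```python
# Minimal sketch of a triple store: subject-predicate-object edges,
# queried to walk from diagnoses to subjects to content.

triples = set()

def ingest(subject: str, predicate: str, obj: str):
    """Map a piece of enriched source data to an edge in the graph."""
    triples.add((subject, predicate, obj))

def related(subject: str, predicate: str) -> list:
    """Query: all objects connected to `subject` via `predicate`."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

# Hypothetical healthcare data: diagnosis -> subjects -> content.
ingest("diabetes", "has_subject", "nutrition")
ingest("diabetes", "has_subject", "insulin_therapy")
ingest("nutrition", "has_content", "meal-planning-guide")

print(related("diabetes", "has_subject"))   # ['insulin_therapy', 'nutrition']
print(related("nutrition", "has_content"))  # ['meal-planning-guide']
```

A production engine would use a graph database and SPARQL or Cypher instead of a Python set, but the recommendation query is the same two-hop walk shown here.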


Agile Architecture: A Comparison of TOGAF and SAFe Framework for Agile Enterprise Architecture

In TOGAF, the Enterprise Architect plays a vital role in creating, maintaining, and evolving the enterprise architecture of an organization. They are responsible for aligning business and IT strategies, processes, and systems, and ensuring that the architecture supports the organization’s goals and objectives. The Enterprise Architect in TOGAF follows the Architecture Development Method (ADM), a structured approach that guides the creation and implementation of enterprise architecture. ... In SAFe, the Enterprise Architect has a slightly different focus and responsibilities. While the core principles of enterprise architecture remain the same, the Enterprise Architect in SAFe works within the context of Agile development practices and the broader framework of SAFe. They collaborate closely with Agile teams as well as other architect roles and play a crucial role in providing technical leadership, guidance, and support. The Enterprise Architect in SAFe helps teams align their technical solutions with the overall enterprise architecture, ensuring that the architectural vision is realized, and technical debt is managed effectively.


Cyber Insecurity, AI and the Rise of the CISO

Adding to cyber insecurity is the unease around the use of artificial intelligence, not only by public employees but by cyber criminals too. It comes as no surprise that artificial intelligence (AI) is being used by cyber criminals to further exploit cyber weaknesses and vulnerabilities. In PTI’s City and County AI Survey, AI was listed as the No. 1 application to help thwart cyberattacks. Respondents recognize how AI can actively scan for suspicious patterns and anomalies as well as assist in remediation and recovery strategies. What’s more, AI systems continue to learn and act. Also new this year is the renewed focus on zero trust frameworks and strategies. Zero trust has never been more critical, and unfortunately it takes both time and talent to fully comprehend all its dependencies leading toward deployment. This year also saw, for the first time in years, the National Institute of Standards and Technology (NIST) modify its Cybersecurity Framework to include an underlying layer of governance in each of its traditional five pillars.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - October 14, 2023

What is tokenization?

Tokenization is the process of issuing a digital representation of an asset on a (typically private) blockchain. These assets can include physical assets like real estate or art, financial assets like equities or bonds, nontangible assets like intellectual property, or even identity and data. Tokenization can create several types of tokens. Stablecoins, a type of cryptocurrency pegged to real-world money designed to be fungible, or interchangeable, are one example. Another type of token is an NFT—a nonfungible token, or a token that can’t be replicated—which is a digital proof of ownership people can buy and sell. Tokenization is potentially a big deal. Industry experts have forecast up to $5 trillion in tokenized digital-securities trade volume by 2030. There’s been hype around digital-asset tokenization for years, since its introduction back in 2017. But despite the big predictions, it hasn’t yet caught on in a meaningful way. We are seeing slow movement: US-based fintech infrastructure firm Broadridge now facilitates more than $1 trillion monthly on its distributed ledger platform.
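To make the fungible/nonfungible distinction concrete, here is a toy ledger sketch (the names and balances are invented for illustration; real tokenization happens on a blockchain, not an in-memory dict):

```python
# Toy ledger: fungible tokens are interchangeable balances; a nonfungible
# token is a unique, indivisible record of ownership.
fungible_balances = {"alice": 100, "bob": 50}   # e.g. units of a stablecoin
nft_owners = {"artwork-42": "alice"}            # e.g. an NFT

def transfer_fungible(sender, receiver, amount):
    # Any `amount` units are as good as any other units: only totals matter.
    if fungible_balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    fungible_balances[sender] -= amount
    fungible_balances[receiver] = fungible_balances.get(receiver, 0) + amount

def transfer_nft(token_id, sender, receiver):
    # The token cannot be split or replicated: the whole record changes hands.
    if nft_owners.get(token_id) != sender:
        raise ValueError("sender does not own this token")
    nft_owners[token_id] = receiver

transfer_fungible("alice", "bob", 30)
transfer_nft("artwork-42", "alice", "bob")
```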


MVP or TVP? Why Your Internal Developer Platform Needs Both

“TVP is about ‘thinness’ to try and avoid a massive platform. TVP is something that remains throughout an organizational evolution — it should always be the thinnest viable — whereas MVP is normally the first stage of something larger.” This shift toward investment in long-term thinness is extremely important. Gregor Hohpe calls this a “sinking platform” in his 2022 PlatformCon talk “The Magic of Platforms.” ... “You can leave your platform the same because you invested all this kind of money, and we call this a sinking platform as the water level rises, right; it might be justified from investment, but you are kind of duplicating things that are now available in the base platform.” Hohpe goes on to describe how platform teams need to intentionally decide on their philosophy when it comes to supporting their platform: “Or you build a ‘floating platform’ where, when the base platform gains the capabilities you have built, you say ‘Oh, perfect! I don’t need my part anymore. I can let the base platform handle that, and I can innovate further on top. I build new things.'”


7 Blockchain Technology Mistakes You Should Watch Out For

The application of blockchain for secure information exchange and record storage gives rise to many misconceptions. CIOs often confuse database management systems (DBMS) with blockchain. Existing blockchain platforms cannot support complex data models and do not guarantee high throughput or low latency. They were built to provide an immutable, authoritative, and trusted record of events among a dynamic assortment of unrelated stakeholders. ... A smart contract is code that automatically executes legally relevant events and actions that are part of an agreement. The main utility of smart contracts is to reduce the need for trusted intermediaries, prevent fraud, and reduce arbitration costs. They are commonly associated with cryptocurrencies like Bitcoin and are fundamental building blocks of Decentralized Finance (DeFi) applications. At present, however, smart contracts are not necessarily legally recognized agreements, though some countries are exceptions.
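As a rough illustration of the "code that executes the agreement" idea, consider an escrow that settles itself once its conditions are met. This is a plain-Python toy, not a real smart-contract language like Solidity, and the parties and amounts are hypothetical:

```python
class Escrow:
    """Toy escrow: the 'contract' pays out automatically once both
    agreement conditions are met, with no intermediary deciding."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.paid_to = None

    def fund(self):
        """Buyer locks funds into the contract."""
        self.funded = True
        self._settle()

    def confirm_delivery(self):
        """Delivery of the goods is recorded."""
        self.delivered = True
        self._settle()

    def _settle(self):
        # The legally relevant action executes itself when the terms are met.
        if self.funded and self.delivered and self.paid_to is None:
            self.paid_to = self.seller

deal = Escrow(buyer="alice", seller="bob", amount=100)
deal.fund()
deal.confirm_delivery()
print(deal.paid_to)  # → bob
```

On an actual blockchain, the network (rather than a single process) guarantees that this settlement logic runs exactly as written, which is what removes the trusted intermediary.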


Practicing Good Green Governance Leads to Profits

Let’s begin by defining green governance. It refers to a set of principles and practices aimed at promoting environmental sustainability and responsible management of natural resources within a clear governance and decision-making framework. A green-minded corporation should integrate environmental considerations into policies, regulations, and actions throughout all divisions of its business. Green governance aims to balance economic and environmental practices to create a profitable and sustainable future. ... Practicing green governance requires a holistic approach that considers the interconnectedness of environmental, operational, and economic systems to balance human needs and the health of the planet with the company’s bottom line and valuation. That balance is what helps ensure a sustainable and prosperous future for all stakeholders. ... Many companies want to showcase their greenness in a credible and trustworthy way but find the current system of backward-looking, voluntary standards and the myriad of ESG metrics to be daunting, arduous, and costly.


The Future is Now: IoT and the Evolution of Business Computing

The proliferation of IoT devices and sensors is generating massive amounts of data that provides invaluable insights for business decision-making. However, organizations need talent to properly analyze and derive meaning from these huge IoT datasets. A business management and accounting online degree is valuable in helping to develop the analytics skills needed to fully capitalize on IoT capabilities. These programs prepare the next generation of data-driven business leaders who will drive transformative change through IoT adoption. With access to real-time data from across the enterprise, managers can gain unprecedented visibility into operations. Marketers can analyze IoT data to understand customer behavior patterns and rapidly adjust campaigns. Supply chain personnel can identify and resolve bottlenecks as they occur. Executives can track core business metrics in real time to guide strategic decisions. The sheer volume of IoT data brings a paradigm shift in business computing where decisions are proactive, not reactive.


Psychological safety at the workplace

People show up at work with different states of mental well-being. So, empathy is absolutely non-negotiable. A meaningful way to be empathetic is to be mindful of our language and its impact on the other person. For instance, instead of the confrontational approach where one might say, “Your code is quite bad and not what I expected” say, “I know that you are capable of writing great code. Let’s figure out what happened this time.” This manner of checking in with each other on their state of mind and creating a space for team members to discuss their mental health without fear of judgment is a move in the right direction. ... Welcome different perspectives, and when people offer them, disagree with respect. People tend to cushion their ideas when they fear judgment. For instance, they might say, “this is probably a silly idea,” or “this may be a dumb question.” Reassure them that all ideas are welcome. Watch out for groupthink — the tendency of the minority to stay silent in order not to upset the majority. Invite opinions from everyone. 


The future of augmented reality is AI

Whenever we in the tech media or tech industry think or talk about AR, we tend to focus on what kind of holographic imagery we might see superimposed on the real world through our AR glasses. We imagine hands-free Pokémon Go, or radically better versions of Google Glass. But since the generative AI/LLM-based chatbot revolution struck late last year, it has become increasingly clear that of all the pieces that make up an AR experience, holographic digital virtual objects are the least important. The glasses are necessary. Android phones and iPhones have had “augmented reality” capabilities for years, and nobody cares because looking at your phone doesn’t compare to just seeing the world hands-free through glasses. The cameras and other sensors are necessary. It’s impossible to augment reality if your device has no way to perceive reality. The AI is necessary. We need AI to interpret and make sense of arbitrary people, objects, and activity in our fields of view.


How to maintain a harmonious workplace atmosphere in multigenerational firms

Ensuring the well-being of a multigenerational workforce is crucial for any organisation. HR can play a key role in this by implementing policies and programs that cater to the unique needs and preferences of different generations. For instance, offering flexible work arrangements, mentoring programs, and personalised professional development opportunities can help employees of all ages feel valued and supported. Additionally, providing access to resources and benefits that address specific health and wellness concerns can help ensure that employees stay healthy and productive throughout their careers. “By prioritising the well-being of all employees, regardless of age or background, organisations can create a more inclusive and supportive workplace environment that promotes work-life balance.” Creating a diverse, equitable, and inclusive workplace is essential for fostering a positive and productive work environment.


Oh No, the Software Consultants Are Coming!

Sadly, consultants are still used to back up a decision that has already been made by management. So a sudden presence of consultants is often viewed as positively as the arrival of sharks around a stalled boat. But in most cases, consultants are just hired to see why an area is not performing in some way. It is perfectly common for them to tell management that they are the problem. That might shorten the engagement, but you can do that sort of thing when you are not an employee. More realistically, consultants might need to explain to staff why systematic changes will improve the company’s prospects, which still leaves the unspoken threat about what happens if things don’t change. And yet, many developers do fall into ruts and moving on may truly be the best thing to do. And of course, escaping a death march project is not always the worst thing that can happen. By the way, if you are staff, always ask consultants for career advice. Not only is it free, but it won’t be biased by your background or colored by employer motives.


CBDC and stablecoins: Early coexistence on an uncertain road

It is too early to confidently forecast the trajectory and endgame for CBDCs and stablecoins, given the multitude of unresolved design factors still in play. For instance, will central banks focus first on retail or wholesale use cases, and emphasize domestic or cross-border applications? And how rapidly will national agencies pursue regulation of stablecoins prior to issuing their own CBDCs? To begin to understand some of the potential scenarios, we need to appreciate the variety and applications of CBDCs and stablecoins. There is no single CBDC issuance model, but rather a continuum of approaches being piloted in various countries. ... At the opposite end of the spectrum, China’s CBDC pilot relies on private-sector banks to distribute and maintain eCNY (digital yuan) accounts for their customers. The ECB approach under consideration involves licensed financial institutions each operating a permissioned node of the blockchain network as a conduit for distribution of a digital euro.



Quote for the day:

"Anything is possible when you have the right people there to support you." -- Misty Copeland

Daily Tech Digest - October 13, 2023

Meet the New DevSecOps

AI-powered DevSecOps for the enterprise merges AI capabilities that evolved separately until now: AI-powered software delivery workflows enable software delivery processes to automate large-scale programs like app modernization and cloud migrations. AI-based code governance helps developers use code-assist and other generative AI tools to speed up the writing, checking and optimizing of traditional code. Predictive intelligence applies machine learning algorithms to data across the entire software development and delivery life cycle (SDLC) so managers gain earlier software delivery insights in order to forecast capacity, foresee risks and respond to changes. The reality is that when AI solutions are implemented — often in piecemeal fashion among smaller teams — they add to the clutter of siloed and fragmented tools, methods and processes that will eventually bite back on the short-lived perception of “progress.” A truly systemic approach to DevSecOps that merges and leverages AI capabilities at scale is one of the most important adjustments an enterprise can make for a lasting advantage in today’s AI-augmented world.


Navigating Data Management At a Strategic & Tactical Level

In Mexico, the biggest challenge is determining which part of the data management strategy to focus on first. Some organizations' data integration strategies amount to purchasing technology in the hope that it will solve all their problems. However, it is essential to first organize the data before attempting to solve everything in a single move or through a single technology. Before developing a Master Data Management (MDM) strategy, companies must first sit down and identify what they want from it. The data strategy should come from the top of the organization because it will require a cultural shift to implement it correctly. ... Having technological expertise alone is not enough to help the client capitalize on the collected data. Alldatum focuses on the human side of technology, which is sometimes overlooked. We want to help clients extract value from information because if they succeed in their data strategies, their company will grow and they will make better decisions. If they make better decisions, there will be more job opportunities, which will positively impact the country's economy.


The impact of artificial intelligence on software development? Still unclear

It's common for experts to suggest these days that AI will deliver significant boosts to software development and deployment productivity, along with developer job satisfaction. "So far our survey evidence doesn't support this," the report's authors, Derek DeBellis and Nathen Harvey, both with Google, state. "Our evidence suggests that AI slightly improves individual well-being measures -- such as burnout and job satisfaction -- but has a neutral or perhaps negative effect on group-level outcomes such as team performance and software delivery performance." These flat findings are likely due to the fact that we're still at the early stages of AI adoption, they surmise: "There is a lot of enthusiasm about the potential of AI development tools, as demonstrated by the majority of people incorporating at least some AI into the tasks we asked about. But we anticipate that it will take some time for AI-powered tools to come into widespread and coordinated use in the industry."


Quantum risk is real now: How to navigate the evolving data harvesting threat

The real emphasis of HNDL threats is on high-value, long-term data assets like trade secrets or intellectual property, which are passively harvested from large-scale data access points rather than personal WiFi hotspots. In essence, if a device is likely to possess important actionable information of near-term value, it’s more likely to be attacked immediately rather than be subjected to a longer-term HNDL strategy. Given the sensitive nature of the data at stake — from personal information to state secrets — HNDL attacks pose a severe threat. ... Understanding quantum security is essential in mitigating the risk of HNDL attacks. Once asymmetric encryption, which is currently not quantum-safe, is broken, session keys and symmetric keys will be exposed. Therefore, mitigation involves either using quantum-secure encryption or eliminating the transmission of encryption keys altogether. It’s essential to clear up a common misconception: While Advanced Encryption Standard (AES) is often touted as quantum-safe, the security of AES often hinges on the RSA mechanism — a type of asymmetric encryption — used to distribute its keys, which is not quantum-safe.
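A toy sketch can show why breaking the asymmetric layer retroactively exposes everything. The XOR "ciphers" below are stand-ins for RSA and AES, chosen only so the example is self-contained; this is emphatically not real cryptography:

```python
import secrets

def xor(data, key):
    # Toy symmetric transform standing in for a real cipher; NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hybrid scheme: a fresh symmetric session key encrypts the message, and the
# session key itself is wrapped under an "asymmetric" key (stand-in for RSA).
rsa_stand_in = secrets.token_bytes(16)
session_key = secrets.token_bytes(16)
message = b"long-lived trade secret"

recorded_traffic = {
    "wrapped_key": xor(session_key, rsa_stand_in),  # what RSA would protect
    "ciphertext": xor(message, session_key),        # what AES would protect
}

# Harvest now, decrypt later: years on, an attacker with a quantum computer
# recovers the asymmetric key, unwraps the session key, and reads the
# ciphertext captured long ago.
recovered_key = xor(recorded_traffic["wrapped_key"], rsa_stand_in)
plaintext = xor(recorded_traffic["ciphertext"], recovered_key)
print(plaintext)  # → b'long-lived trade secret'
```

The AES layer was never broken; compromising the key-distribution layer was enough, which is exactly the misconception the article flags.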


What is a data architect? Skills, salaries, and how to become a data framework master

Data architects are senior visionaries who translate business requirements into technology requirements and define data standards and principles, often in support of data or digital transformations. The data architect is responsible for visualizing and designing an organization’s enterprise data management framework. This framework describes the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data. The data architect also “provides a standard common business vocabulary, expresses strategic requirements, outlines high-level integrated designs to meet those requirements, and aligns with enterprise strategy and related business architecture,” according to DAMA International’s Data Management Body of Knowledge. ... The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible for visualizing the “blueprint” of the complete framework that data engineers then build.


How Agile Teams Can Improve Predictability by Measuring Stability

The stability metric, Ψ, is a really simple calculation that can be done on the back of an envelope. It has two inputs: the arrival rate, λ, and the service rate, μ. The arrival rate is the number of PBIs added to a system in a period of time. The service rate, μ, is the number of PBIs successfully done by the team in the same period of time. With these two inputs you can calculate the dimensionless Ψ by just dividing the service rate by the arrival rate. When Ψ is less than one, the system is unstable; when it is greater than one, it is stable, and when it is equal to one it is optimally stable. When Ψ is equal to one, the average arrival rate is equal to the average service rate and Little’s law applies. In this state, the backlog is neither growing nor shrinking over time and the average time an item will spend before it is done can be calculated by dividing the total number of items in the system, L, by the arrival rate, λ. When Ψ is less than one, then items are arriving faster than they can be dealt with and the backlog is growing.
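The arithmetic above fits in a few lines. Here is a sketch (the sprint numbers are invented for illustration):

```python
def stability(arrival_rate, service_rate):
    """Dimensionless stability metric Ψ: service rate μ divided by
    arrival rate λ. Ψ < 1 is unstable (backlog grows), Ψ > 1 is stable,
    and Ψ == 1 is optimally stable."""
    return service_rate / arrival_rate

def avg_time_in_system(items_in_system, arrival_rate):
    """Little's law, which applies when Ψ == 1: the average time an item
    spends before it is done is L / λ."""
    return items_in_system / arrival_rate

# Example: 10 PBIs arrive per sprint but only 8 get done per sprint.
psi = stability(arrival_rate=10, service_rate=8)
print(psi)  # → 0.8, i.e. unstable: the backlog grows

# With arrival rate equal to service rate and 20 items in the system,
# an item waits on average 20 / 10 = 2 sprints.
print(avg_time_in_system(items_in_system=20, arrival_rate=10))  # → 2.0
```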


DarkGate Operator Uses Skype, Teams Messages to Distribute Malware

Trend Micro's analysis showed that once DarkGate is installed on a system, it drops additional payloads. Sometimes those are variants of DarkGate itself or of Remcos, a remote access Trojan (RAT) that attackers previously have used for cyber-espionage surveillance and for stealing tax-related information. Trend Micro said it was able to contain the DarkGate attacks it observed before any actual harm came to pass. But given the developer's apparent pivot to a new malware leasing model, enterprise security teams can expect more attacks from varied threat actors. The objectives of these adversaries could vary, meaning organizations need to keep an eye out for threat actors using DarkGate to infect systems with different kinds of malware. While the attacks that Trend Micro observed targeted individual Skype and Teams recipients, the attacker's goal clearly was to use their systems as an initial foothold on the target organization's networks. "The goal is still to penetrate the whole environment, and depending on the threat group that bought or leased the DarkGate variant used, the threats can vary from ransomware to cryptomining," according to Trend Micro.


Managing a Freelance Data Science Team

Freelance data scientists are a unique breed. They combine the technical skills of a data scientist with the entrepreneurial mindset of a freelancer. They are highly self-motivated, disciplined, and adaptable, able to navigate the uncertainties of freelance work while maintaining a high level of professional expertise. In terms of technical skills, freelance data scientists typically have advanced degrees in fields like statistics, computer science, or data science, and have a deep understanding of machine learning algorithms, statistical modeling, data visualization, and programming languages like Python and R. They are also adept at using data science tools and platforms like Hadoop, Spark, and Tableau. However, what sets freelance data scientists apart is their ability to operate independently. They are comfortable with remote work, adept at managing their time, and capable of maintaining strong relationships with clients. They are also often more up-to-date with the latest industry trends and technologies, as they need to constantly upskill to remain competitive.


Researchers: The Future of AI Is Wide, Deep, and Large

In their research, the team considered the relationship between deep and wide neural networks. Using quantitative analysis, they found that deep and wide networks can be converted back and forth on a continuum. Using both will give a bigger picture and avoid bias. Their research hints at the future of machine learning, in which networks are both deep and wide and interconnected with favorable dynamics and optimized ratios between width and depth. Networks will become increasingly complicated, and when dynamics reach the desired states, they will produce amazing outcomes. “It’s like playing with LEGO bricks,” said Wang. “You can build a very tall skyscraper or you can build a flat large building with many rooms on the same level. With networks, the number of neurons and their interconnection are the most important. In 3D space, neurons can be arranged in myriad ways. It’s just like the structure of our brains. The neurons just need to be interconnected in various ways to facilitate diverse tasks.”
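One way to make the width-versus-depth trade-off concrete is to count parameters in fully connected networks of different shapes. The layer sizes below are arbitrary illustrations, not figures from the research:

```python
def mlp_params(layer_widths):
    """Total weights + biases for a fully connected network whose layers
    have the given widths, e.g. [inputs, hidden..., outputs]."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(layer_widths, layer_widths[1:]))

# A deep, narrow network and a shallow, wide one with the same
# input (64) and output (10) sizes:
deep = [64] + [32] * 8 + [10]   # eight hidden layers of width 32
wide = [64, 256, 10]            # one hidden layer of width 256

print(mlp_params(deep), mlp_params(wide))
```

Both shapes connect the same inputs to the same outputs, but they distribute their parameters very differently, which is the continuum the researchers describe converting back and forth along.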


Why we need to focus AI on relationships

Granted, you also need to onboard the AI to make sure employees trust the tool and will use it. This can be problematic, given that many employees fear that AI will replace them. A Harvard Business Review article provided guidance on how to do this properly, suggesting that the AI be set up to succeed following known successful procedures first and then advancing the interaction as the employee becomes comfortable with the tool. In other words, you start by making the AI into an assistant that enhances and helps the employee, then allow it to become a monitor/mentor as it provides real-time feedback on how the employee is interacting with others in the company. The next phase is the coach, where the AI becomes proactive and able to provide more detailed feedback at times of the employee’s choosing, with the final phase being that the AI becomes a teammate, able to autonomously do things on behalf of the human/AI team. During this evolution of the personal AI, the employee trains the AI and the AI trains the employee, so they become two parts of a more productive team.



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - October 12, 2023

Bridging the AI-Human Divide: AI as Your Operations Teammate

When viewed through the lens of collaboration rather than competition, AI should free people to do what they are uniquely good at — creative problem-solving, collaborating and using judgment to solve complex challenges. This “human-in-the-loop” approach to AI can alleviate people from daily-grind tasks that are time-consuming, repetitive, and often lead to burnout. Envision a scenario where your AI teammate seamlessly integrates into your operational workflows, becoming a trusted assistant who handles time-consuming tasks. This teammate has a comprehensive understanding of your operations, understands the contextual importance of data and can deliver that data exactly when needed in the form of metrics, graphs and recommended actions. Imagine an AI teammate that collects and sorts a massive amount of data, producing concise summaries for human analysis. Or AI that provides you with information that detects and contextualizes incidents. As potent as AI is in dissecting vast data sets, spotting patterns, and rendering contextual analysis, it’s still in its infancy.


Chatbots in the future of customer service

Customers often feel like they must jump over hurdles, trying different phrase combinations to explain what they need to get a rules-based chatbot — which is typically defined with existing mapped-out responses — to understand their request. Generative AI addresses this pain point, by continuously training and optimising chatbots to deliver a far more sophisticated, personalised level of customer support. Through conversational interactions, the technology can provide intelligent support for faster issue resolution while increasing self-solve rates to better streamline processes for the human agents. Generative AI’s ability to formulate responses based on historical insights, such as past behaviour and user profile problems, is a key differentiator for business leaders looking to win customer loyalty. Considering that the majority of customers say it’s important that organisations understand them, and 65 per cent want agents to resolve problems easily, these demands are impossible to meet without integrating intelligent solutions.


Treat generative AI like a burning platform and secure it now

“As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon,” wrote Chris McCurdy, worldwide vice president & general manager with IBM Security in a blog about the study. For network and security teams, challenges could include having to battle the large volumes of spam and phishing emails generative AI can create; watching for denial-of-service attacks by those large traffic volumes; and having to look for new malware that is more difficult to detect and remove than traditional malware. “When considering both likelihood and potential impact, autonomous attacks launched in mass volume stand out as the greatest risk. However, executives expect hackers faking or impersonating trusted users to have the greatest impact on the business, followed closely by the creation of malicious code,” McCurdy stated. There’s a disconnect between organizations’ understanding of generative AI cybersecurity needs and their implementation of cybersecurity measures, IBM found.


Technical debt has hindered UK innovation

With technical debt reaching an extreme tipping point, it is important to learn from how we got here. As businesses are driven to provide better and more customized solutions for their customers, they are competing for a limited pool of developers who can help them navigate complex IT infrastructure and operations. The research found that the two leading factors behind technical debt were the high number of development languages, which makes it difficult to maintain and upgrade systems, as well as steep turnover in development teams. This results in new hires becoming responsible for platforms they did not create and may not fully understand. Other significant factors include companies accepting known defects in order to meet deadlines, and the presence of outdated development languages and frameworks. As a result, businesses struggle to maintain and rework critical systems. Over time, technical debt builds up and compounds through thousands of seemingly small decisions, before becoming a major problem preventing companies from investing in new innovations or services.


How ChatGPT and other AI tools could disrupt scientific publishing

Many editors are concerned that generative AI could be used to more easily produce fake but convincing articles. Companies that create and sell manuscripts or authorship positions to researchers who want to boost their publishing output, known as paper mills, could stand to profit. A spokesperson for Science told Nature that LLMs such as ChatGPT could exacerbate the paper-mill problem. One response to these concerns might be for some journals to bolster their approaches to verify that authors are genuine and have done the research they are submitting. “It’s going to be important for journals to understand whether or not somebody actually did the thing they are claiming,” says Wachter. At the publisher EMBO Press in Heidelberg, Germany, authors must use only verifiable institutional e-mail addresses for submissions, and editorial staff meet with authors and referees in video calls, says Bernd Pulverer, head of scientific publications there. But he adds that research institutions and funders also need to monitor the output of their staff and grant recipients more closely.


Israel-Hamas conflict extends to cyberspace

There have also been cyberattacks targeting Palestine by an India-based hacktivist group called Indian Cyber Force. The group has shown solidarity with Israel in the current conflict and has taken responsibility for bringing down the websites of Hamas, Palestine National Bank, Palestine Web Mail Government Services, and Palestine Telecommunication Company. "Indian Cyber Force has previously initiated several cyber campaigns in support of India. Their previous targets include Bangladesh and Canada. It appears that Bangladesh was targeted regarding their relationship with Pakistan," Flashpoint said in a report. ... India's open support for Israel in the ongoing conflict has also dragged the country into this cyber warfare. Several hacktivist groups objected to India's support for Israel in the current conflict and its departure from a traditional neutral stance in the Israel-Palestine conflict. Hacktivist group Ghosts of Palestine posted a message on the Telegram channel claiming to target India. The message clearly highlighted that the cause of the attacks was India's support for Israel.


How To Improve Your Data Pipeline Security Credit Score

What do we mean when discussing security tech debt in your data pipeline? It refers to building a data pipeline that lacks the scalability to govern data access or confidently secure data across all of your target databases. This can take a couple of forms:

- Data teams or line-of-business leaders are so incentivized to get data into the cloud for insights that they build a pipeline to move all of it, without regard to which data is sensitive and must be governed under privacy regulations. This lets companies use the data, but it is the riskiest approach.
- Security teams get wind of this and slam the brakes on the data teams' plans. Perhaps the data teams can still move the easy or obviously safe data, leaving sensitive data behind to keep the project going. But if they haven't identified every place sensitive data exists across their sources, the security team may decide it's safer to migrate nothing at all, locking everything down under existing security controls.
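The middle ground between those two failure modes is classifying data before it moves. A minimal sketch of that idea, assuming a simple name-based heuristic (the patterns and function names here are illustrative, not any vendor's API; real classification would also inspect values and consult a data catalog):

```python
# Hypothetical sketch: partition columns into "safe to migrate" and
# "needs governance" before moving anything to the cloud, so safe data
# keeps flowing and sensitive data gets controls instead of a blanket block.
import re

# Column-name fragments that commonly indicate regulated data (illustrative only).
SENSITIVE_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"ssn", r"social", r"dob|birth", r"email", r"phone", r"salary")
]


def is_sensitive(column_name: str) -> bool:
    """Return True if the column name matches a known sensitive pattern."""
    return any(p.search(column_name) for p in SENSITIVE_PATTERNS)


def partition_columns(columns):
    """Split column names into (safe_to_migrate, needs_governance)."""
    safe, governed = [], []
    for col in columns:
        (governed if is_sensitive(col) else safe).append(col)
    return safe, governed


safe, governed = partition_columns(
    ["customer_id", "email_address", "order_total", "date_of_birth"]
)
print(safe)      # migrate as-is
print(governed)  # mask, tokenize, or hold back under policy
```

With a split like this, the project can proceed with the safe subset while the governed columns wait for masking or access policies, rather than the all-or-nothing outcomes described above.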


IT Leaders in Banking & Finance Prepare for Business in the Metaverse

Several banks are setting up lounges or virtual branches as an entry point to the metaverse and using the space to establish a presence and nurture customer relationships. Offering education, support, and advice on financial products in the metaverse can enable financial services brands to engage Gen Z even as VR banking matures. HSBC, for example, purchased virtual real estate in The Sandbox to engage and connect with sports, e-sports, and gaming enthusiasts. Is this the right idea? IT leaders attending The Future of Banking event had mixed feelings regarding virtual banking services. They expressed skepticism about the likelihood of adoption without a specific incarnation of virtual offerings that fires the customer’s imagination. Banks will need to give customers compelling reasons to go to the metaverse to complete actions they can already do with mobile banking applications or develop actions they cannot experience with mobile or web interfaces. The next biggest hurdle will be understanding what that will look like across the industry.


TCS builds the first digital heart for a professional runner

A digital heart is a high-fidelity multiscale computational model of the cardiac system. It enables insight into myocardium mechanisms, subcellular mechanisms, electromechanical activation (generation and conduction of cardiac electrical potential leading to cardiac muscle contraction), and hemodynamics (valvular functions, chamber pressures, myocardial wall tension, and coronary blood flow) on both the micro and macro scales. TCS creates a digital heart from a number of data sets, including MRI scans. With the MRI data alongside various historical and speculative data sets, a functioning heart is modeled in a virtual environment. By applying AI/ML and other analysis, users can see the impact of different conditions and situations, such as beginning a long-term exercise program or the effect of a medication. After a digital twin of a heart is created, researchers can go a step further and use 3D printing to create a physical version of the heart. A 3D-printed digital twin heart allows a doctor to practice surgical techniques and test solutions such as new heart valves or drugs without ever touching an actual body.


Keeping up with the demands of the cyber insurance market

While insurers are bringing new products to the market, they are increasingly tightening the requirements for prospective and existing policyholders on the cyber risks they underwrite, asking organizations to demonstrate a high level of security preparedness to gain coverage. In this scenario, thorough planning ahead of the application process ensures that organizations are in the best position to get coverage and reap the benefits of their policy. So, what are the priorities and the key security factors at play to ensure organizations can improve their chances of qualifying? ... Additionally, insurers pay close attention to incident response plans, expecting a robust strategy that aligns IT, security, and developers for a swift, effective reaction to cyber threats. Devising thorough plans, with role checklists and response measures, and organizing regular simulation exercises will enhance organizations' incident readiness and show insurers that they are genuinely prepared. Finally, post-attack recovery plans also play a significant role in coverage viability.



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick