Daily Tech Digest - February 28, 2024

3 guiding principles of data security in the AI era

Securing the AI: AI deployments – including data, pipelines, and model output – cannot be secured in isolation. Security programs need to account for the context in which AI systems are used and their impact on sensitive data exposure, effective access, and regulatory compliance. Securing the AI model itself means identifying model risks, over-permissive access, and data flow violations throughout the AI pipeline. Securing from AI: Like most new technologies, artificial intelligence is a double-edged sword. Cyber criminals are increasingly turning to AI to generate and execute attacks at scale. Attackers are currently leveraging generative AI to create malicious software, draft convincing phishing emails, and spread disinformation online via deepfakes. There’s also the possibility that attackers could compromise generative AI tools and large language models themselves. ... Securing with AI: How can AI become an integral part of your defense strategy? Embracing the technology for defense opens possibilities for defenders to anticipate, track, and thwart cyberattacks to an unprecedented degree. AI offers a streamlined way to sift through threats and prioritize which ones are most critical, saving security analysts countless hours.


Web3 messaging: Fostering a new era of privacy and interoperability

Designed to be interoperable with various decentralized applications (DApps) and blockchain networks, Web3 messaging protocols enable developers to seamlessly integrate messaging functionality into their decentralized services — a stark contrast to their traditional equivalents that host closed ecosystems, which limit communication with users on other platforms. Beoble, a communication infrastructure and ecosystem that allows users to chat between wallets, is one of the Web3 messaging platforms ready to change how people use digital communication. The platform comprises a web-based chat application and a toolkit for seamless integration with DApps. Dubbed “WhatsApp for Web3,” Beoble removes the need for login methods like Twitter or Discord, instead mandating only a wallet for access. Users can log in using their wallets and send texts, images, videos, links and files across blockchain networks. Blockchain app users can utilize emojis and nonfungible token (NFT) stickers in their digital communication with Beoble, adding a layer of personality to their conversations. 


As data takes center stage, Codified wants to bring flexibility to governance

As Gupta sees it, many large companies are authoring policies and trying to implement them in various ways, but the software they rely on is too rigid for today’s use cases, leaving them vulnerable, especially when they have to change policy. He wants to change that by translating policy into code that can be implemented in a variety of ways, connected to the various applications that need access to the data, and easily changed when new customers or user categories come along. “We let you author policies in natural language, in a declarative way or using a UI - pick your favorite way - but when those policies are authored, we can codify them into something that can be implemented in a number of ways and can be very easily changed,” he said. To that end, the company also enables customers to set conditions, such as whether you’ve had security training in the last 365 days or are already part of a team working on a sensitive project. Ultimately, this enables companies to set data access rules based on who the employee is and the applications they are using or projects they are part of, rather than relying on creating groups on which to base these rules.
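To make the idea concrete, here is a minimal sketch of a condition-based access rule expressed as code. It is a hypothetical illustration, not Codified’s actual product or API; the employee attributes, the 365-day training condition, and the function names are all assumptions drawn from the example above.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical policy-as-code: access is decided from declarative conditions
# about the employee, not from membership in manually maintained groups.

@dataclass
class Employee:
    name: str
    last_security_training: date
    projects: set = field(default_factory=set)

def may_access_sensitive_data(employee: Employee, project: str) -> bool:
    """Allow access only if security training is current (within 365 days)
    and the employee already works on the sensitive project."""
    training_current = (date.today() - employee.last_security_training
                        <= timedelta(days=365))
    return training_current and project in employee.projects

alice = Employee("alice", date.today() - timedelta(days=90), {"atlas"})
print(may_access_sensitive_data(alice, "atlas"))   # True: trained, on the project
print(may_access_sensitive_data(alice, "zephyr"))  # False: not on that project
```

Because the conditions live in one place, changing policy (say, tightening the training window to 180 days) means editing a single rule rather than rebuilding group memberships across applications.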


Looking Forward, Looking Back: A Quarter Century as a CISO

The first distributed denial-of-service (DDoS) attack occurred in 1999, followed by the Code Red and Nimda worm cyberattacks that targeted web servers in 2001, and SQL Slammer in 2003, which spread rapidly and brought focus on the need to patch vulnerable systems. The end of the millennium also brought Y2K, the so-called Millennium Bug, which exposed the vulnerability of existing computing infrastructures that formatted dates with only the final two digits and raised the profile of CISOs and other security professionals. Organizations recognized the necessity of dedicated executives responsible for managing cybersecurity risks. ... CISOs were soon making the news, and not always in a good way. Former Uber CISO Joe Sullivan was found guilty of felony obstruction of justice and concealing a data breach in October 2022. The following month, Twitter (now X) CISO Lea Kissner resigned along with the company’s chief privacy officer and chief compliance officer over concerns that Twitter’s new leadership was pushing for the release of products and platform changes without effective security reviews.


How Generative AI is Revamping Digital Transformation to Change How Businesses Scale

Crucially, generative AI can help tailor the dining experience for customers in a way that significantly improves the quality of dine-in or takeaway eating. This is achieved by GenAI models analyzing data like guest preferences, dietary restrictions, past orders, and behavior to offer personalized menu items and even recommend food and drink pairings. Generative AI will even be capable of using available datasets to generate offers on the fly as an instant call-to-action (CTA) if it deems an online visitor isn't yet ready to convert their interest into action. We're already seeing leading global restaurants announce the implementation of generative AI in their processes. ... Generative AI became the technological buzzword of 2023, and for good reason. However, there are many hurdles to overcome in the development of the technology before it drives widespread digital transformation. Regulatory hurdles may be tricky to clear because of questions about how AI programs handle private data and use intellectual property (IP). Quality shortcomings could also cause governance issues among early LLMs, and we've seen plenty of cases where language models "hallucinate" when dealing with unusual queries.


NIST CSF 2.0 released, to help all organizations, not just those in critical infrastructure

The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others, such as finance and reputation. “Developed by working closely with stakeholders and reflecting the most recent cybersecurity challenges and management practices, this update aims to make the framework even more relevant to a wider swath of users in the United States and abroad,” according to Kevin Stine, chief of NIST’s Applied Cybersecurity Division. ... The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond, and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk. The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users...


Even LLMs need education—quality data makes LLMs overperform

Like any student, LLMs need a good source text to produce good outputs. As Satish Jayanthi, CTO and co-founder of Coalesce, told us, “If there were LLMs in the 1700s, and we asked ChatGPT back then whether the earth is round or flat and ChatGPT said it was flat, that would be because that's what we fed it to believe as the truth. What we give and share with an LLM and how we train it will influence the output.” Organizations that operate in specialized domains will likely need to train or fine-tune LLMs on specialized data that teaches those models how to understand that domain. Here at Stack Overflow, we’re working with our Teams customers to incorporate their internal data into GenAI systems. When Intuit was ramping up their GenAI program, they knew that they needed to train their own LLMs to work effectively in financial domains that use tons of specialized language. And IBM, in creating an enterprise-ready GenAI platform in watsonx, made sure to create multiple domain-aware models for code, geospatial data, IT events, and molecules.
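For teams exploring this, the sketch below shows the general shape of fine-tuning a small causal language model on a domain corpus with the Hugging Face transformers and datasets libraries. It is a minimal illustration under stated assumptions: the base model (gpt2) and the training file (domain.txt) are placeholders, and this is not how Stack Overflow, Intuit, or IBM built their systems.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token

# "domain.txt" stands in for a curated, high-quality in-house corpus.
dataset = load_dataset("text", data_files={"train": "domain.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False selects causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The data quality point applies directly here: every record in the training file ends up shaping the model, so curation matters more than volume.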


State of FinOps 2024: Reducing Waste and Embracing AI

Engineers remain the biggest beneficiaries of FinOps observability, even though "engineering enablement" has dropped to a lower position in the report's ranking of surveyed priorities. This suggests that engineers are the people best suited to responding to a sudden change in cost metrics. The report observes that the "engineering persona" gets the most value from both "FinOps training and self-service reporting." ... While waste reduction is a common driver across all respondents, segmenting the survey by cloud spend revealed that those with smaller budgets tend to prioritise improvements in the accuracy of billing forecasts. The report states that these respondents faced the challenge of understanding "the trajectory of spending" prior to it "getting out of hand." Most invested in low-effort solutions such as "manual adjustments" to generated forecast data. In contrast, those with larger budgets tended to prioritise the optimisation of commitment-based discounts to benefit from economies of scale. This included the right-sizing of "reserved instances, savings plans, committed use discounts," as well as specific negotiated discounts.


How to Develop an Effective Governance Risk and Compliance Strategy

“Overcoming silos and fostering communication needs to begin at the top,” Rothaar says in an email interview. Furthermore, aligning GRC goals with broader business objectives ensures that both executive management and individual departments recognize the impact GRC initiatives have on organizational success. “Promoting a culture of communication with open dialogue and knowledge-sharing is essential to a successful and efficient GRC strategy,” she says. Ringel says organizations need to promote awareness of and engagement with risk and compliance, because they influence every member of the organization. “You are only as strong as your weakest link when it comes to risk, so making sure everyone is on the same page and treating risk and compliance smartly is key,” she explains. The impact of compliance is less directly obvious, but if those values are not communicated through every department (product design, development, customer support, marketing, and sales), the end product will reflect that disconnect. “Not every employee needs to know specific regulations, but everyone needs to share the values of data governance and compliance,” Ringel says.


Data storage problems and how to fix them

When undertaking the journey to digitisation, it’s important to consider the issues and challenges and, more importantly, know how to avoid them. ... It’s wise not to attempt a massive data overhaul all at once, especially before you’ve considered what data is valuable, how and where you will store it, and investigated the different options and models available. It all depends on the scope of the transformation and the state the organisation is in. For start-ups, it’s a green field, and the experience is as good as the plan and its periodic inspection and adaptation. For organisations with historic data to migrate, it can get complex. I have experienced both, and the key was to identify what data is valuable, with a clear cut-off date and a policy on how far back we digitise. ... If you are unsure where to start, consult an expert to determine the best solutions and view the initial costs as an investment. Digital transformation of data brings the benefits of efficiency and time savings and, with those, reduced costs. The long-term benefit can far outweigh the upfront costs. Digital systems are typically faster and more efficient than manual systems.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - February 27, 2024

Market incentives in the pursuit of resilient software and hardware

For cyber security to continue to evolve as a discipline, we need both quantitative and qualitative insights to understand those aspects that, when combined, work most effectively to address threat and risk, along with human factors and operational dimensions. These solutions then need to be coupled with a compelling narrative to explain our conclusions and objectives to a range of audiences. For the quantitative aspects, access to underlying data types and sources is critical. When we think about software and hardware specifically, there are many possible points of measurement which can contribute to our understanding of its intrinsic security and support assurance. ... Improving the resilience of our software and hardware technology stacks in ways that can scale globally is a multi-faceted, sociotechnical challenge. Creating the right market incentives is our priority. Without these in place, we cannot begin to make progress at the pace or scale we need. Our collective interventions to improve engineering best practices and more transparent behaviours must be driven by data, and targeted by research and innovation. All of this requires better access to skills and cyber education, improved tools, and accessible infrastructure. 


Is creating an in-house LLM right for your organization?

Before delving into the world of foundational models and LLMs, take a step back and note the problem you are looking to solve. Once you identify it, determine which natural language tasks you need. Examples of these tasks include summarization, named entity recognition, semantic textual similarity, and question answering, among others. ... Before using an AI tool as a service, government agencies need to make sure the service they are using is safe and trustworthy, which usually isn’t obvious and isn’t captured by just looking at an example set of output. And while the executive order doesn’t apply to private sector businesses, these organizations should consider whether to adopt similar policies. ... Your organization’s data is the most important asset to evaluate before training your own LLM. Companies that have accumulated high-quality data over time are the luckiest in today’s LLM age, as data is needed at almost every step of the process, including training, testing, re-training, and beta tests. High-quality data is the key to success when training an LLM, so it is important to consider what that truly means.
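One low-cost way to scope those tasks is to prototype each candidate with off-the-shelf models before committing to building or buying an LLM. The sketch below uses Hugging Face pipelines; the task identifiers are the library's standard ones, while the sample text and parameter choices are made up for illustration.

```python
from transformers import pipeline

doc = ("The agency processed 12,000 permit applications in fiscal year 2023, "
       "up 15 percent from the previous year.")

# Each pipeline downloads a reasonable default model for quick evaluation.
summarizer = pipeline("summarization")                # condense documents
ner = pipeline("ner", aggregation_strategy="simple")  # extract entities
qa = pipeline("question-answering")                   # answer from context

print(summarizer(doc, max_length=25, min_length=5))
print(ner(doc))
print(qa(question="How many applications were processed?", context=doc))
```

If a default model already performs adequately on your documents, that is a signal you may not need an in-house LLM for that task at all.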


Privacy Watchdog Cracks Down on Biometric Employee Tracking

In Serco's case, the ICO said Friday that the company had failed to demonstrate why using facial recognition technology and fingerprint scanning was "necessary or proportionate" and that by doing so it had violated the U.K. General Data Protection Regulation. "Biometric data is wholly unique to a person, so the risks of harm in the event of inaccuracies or a security breach are much greater - you can't reset someone's face or fingerprint like you can reset a password," said U.K. Information Commissioner John Edwards. "Serco Leisure did not fully consider the risks before introducing biometric technology to monitor staff attendance, prioritizing business interests over its employees' privacy." "There have been a number of warnings that facial recognition and fingerprints are problematic," said attorney Jonathan Armstrong, a partner at Cordery Compliance. "Most data protection regulators don't like technology like this when it is mandatory for employees. If you're looking at this you'll need a solid data protection impact assessment setting out why the tech is needed, why there are no better solutions, and what you're doing to minimize the impact on those affected."


Cloud providers should play by same rules as telcos, EU commissioner tells MWC

“Currently, our regulatory framework is too fragmented. We are not making the most of our single market of 450 million potential customers. We need a true digital single market to facilitate the emergence of pan-European operators with the same scale and business opportunities as their counterparts in other regions of the world. And we need a true level playing field, because in a technological space where telecommunications and cloud infrastructures converge, there is no justification for them not to play by the same rules,” said the European Commissioner. This means, for Breton, “similar rights and obligations for all actors and end-users of digital networks. This means, first and foremost, establishing the ‘country of origin’ principle for telecoms infrastructure services, as is already the case for the cloud, to reduce compliance costs and investment requirements for pan-European operators.” ... Finally, Breton advocated “Europeanizing the allocation of licenses for the use of spectrum. In the technology race to 6G, we cannot afford any more delays in the concession process, with huge disparities in the timing of auctions and infrastructure deployment between Member States...”


Unlocking the Power of Automatic Dependency Management

Dependency automation relies on having a robust and reliable CI/CD system. Integrating automatic dependency updates into the development workflow will exercise this system much more frequently than updates done by hand, so the process demands robust testing and continuous integration practices. Any update, while beneficial, can introduce unexpected behaviors or compatibility issues. This is where a strong CI pipeline comes into play. By automatically testing each update in a controlled environment, teams can quickly identify and address any issues. Practices like automated unit tests, integration tests, and even canary deployments are invaluable. They act as a safety net, ensuring that updates improve the software without introducing new problems. Investing in these practices not only streamlines the update process but also reinforces overall software quality and reliability. ... Coupled with a robust infrastructure that supports these tools, including adequate server capacity and a reliable network, organizations can create an environment where automatic dependency updates thrive, contributing to a more resilient and agile development process.
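The control flow behind that safety net is simple enough to sketch. The script below is a toy illustration of the pattern (bump one dependency, run the tests, roll back on failure) that tools like Dependabot or Renovate automate at scale inside CI; the package name and pinned version are arbitrary examples.

```python
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run a command and report whether it succeeded."""
    return subprocess.run(cmd).returncode == 0

def try_update(package: str, pinned: str) -> None:
    # Upgrade one dependency to its latest release.
    if not run([sys.executable, "-m", "pip", "install", "--upgrade", package]):
        print(f"install of {package} failed")
        return
    # Gate the update on the full test suite, as a CI job would.
    if run([sys.executable, "-m", "pytest", "-q"]):
        print(f"{package}: update passed tests, safe to commit")
    else:
        # Roll back to the known-good pinned version.
        run([sys.executable, "-m", "pip", "install", f"{package}=={pinned}"])
        print(f"{package}: update failed tests, rolled back to {pinned}")

try_update("requests", pinned="2.31.0")
```

In practice each update would run in an isolated environment or container rather than mutating the developer's own, which is exactly the "controlled environment" the excerpt describes.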


What Is a Good Management Model in Agile Software Development?

Despite that recognition, an approach referred to by Jurgen Appelo as “Management 2.0,” or “doing the right thing wrong,” is still being used. This management style involves a manager who sticks strictly to the organizational hierarchy and forgets that human beings usually don’t like top-down control and mandatory improvements. Within this approach, 1:1 meetings are conducted with employees for individual goal setting. Although this could be considered a good idea - to manage people and their interests - the key is the way managers do it. They should be managing the system around their people instead of managing the people directly. ... Management 3.0, or “doing the right thing,” can be the appropriate solution, in which organizations are considered to be complex and adaptive systems. Jurgen Appelo describes this style of management as “taking care of the system instead of manipulating the people.” Or, in other words, improving the environment so that “it keeps workers engaged and happy is one of the main responsibilities of management; otherwise, the organization fails to generate value.”


Hacker group hides malware in images to target Ukrainian organizations

The attacks detected by Morphisec delivered a malware loader known as IDAT or HijackLoader that has been used in the past to deliver a variety of trojans and malware programs including Danabot, SystemBC, and RedLine Stealer. In this case, UAC-0184 used it to deploy a commercial remote access trojan (RAT) program called Remcos. “Distinguished by its modular architecture, IDAT employs unique features like code injection and execution modules, setting it apart from conventional loaders,” the Morphisec researchers said. “It employs sophisticated techniques such as dynamic loading of Windows API functions, HTTP connectivity tests, process blocklists, and syscalls to evade detection. The infection process of IDAT unfolds in multiple stages, each serving distinct functionalities.” ... To execute the hidden payload, the IDAT loader employs another technique known as module stomping, where the payload is injected into a legitimate DLL file — in this case one called PLA.dll (Performance Logs and Alerts) — to lower the chances that an endpoint security product will detect it.


“Ruthlessly prioritize what’s critical”: Check Point expert on CISOs and the evolving attack surface

Ford argues that CISOs need to face the fact that they cannot secure everything and question how they can best spend their finite resources on attack surface management. This attitude has been reflected in the rise of strategies such as zero trust and Ford says in 2024 CISOs will continue to struggle to secure an increasing number of devices and data and contend with a landscape that is evolving in real time. “I think you have to do two things really well: the first thing I think you have to do is truly identify what’s critical and ruthlessly prioritize what’s critical. The second thing is you have to deploy lasting and intelligent solutions”, Ford argued. “[Businesses] have to deploy solutions that grow and contract with the business and can grow and contract as the threat landscape grows and contracts.” Mitchelson offers some examples of what this sort of deployment might look like in the future, arguing the most potential lies in using technology to realize this elastic functionality. “Internally within the structures of the organization, it could be a matrix type structure whereby you’re actually able to expand and contract internal resourcing within teams as to what you do”, Mitchelson suggests.


Gartner Identifies the Top Cybersecurity Trends for 2024

Security leaders need to prepare for the swift evolution of GenAI, as large language model (LLM) applications like ChatGPT and Gemini are only the start of its disruption. Simultaneously, these leaders are inundated with promises of productivity increases, skills gap reductions and other new benefits for cybersecurity. Gartner recommends using GenAI through proactive collaboration with business stakeholders to support the foundations for the ethical, safe and secure use of this disruptive technology. “It’s important to recognize that this is only the beginning of GenAI’s evolution, with many of the demos we’ve seen in security operations and application security showing real promise,” said ... Outcome-driven metrics (ODMs) are increasingly being adopted to enable stakeholders to draw a straight line between cybersecurity investment and the delivered protection levels it generates. According to Gartner, ODMs are central to creating a defensible cybersecurity investment strategy, reflecting agreed protection levels with powerful properties, and in simple language that is explainable to non-IT executives. 


Using AI to reduce false positives in secrets scanners

Secrets scanners were created to find leaks of such secrets before they reach malicious hands. They work by comparing source code against predefined rules (regexes) that cover a wide range of secret types. Because they are rule-based, secrets scanners often trade off high false-positive rates against low true-positive rates. The inclination toward relaxed rules that capture more potential secrets results in frequent false positives, leading to alert fatigue among those tasked with addressing the alarms. Some scanners implement additional rule-based filters to decrease false alerts, like checking whether the secret resides in a test file or whether it looks like a code variable, function call, CSS selector, etc., through semantic analysis. ... AI can play a role in overcoming this challenge. A large language model (LLM) can be directed at vast amounts of code and fine-tuned (trained) to understand the nuances of secrets and when they should be considered false positives. Given a secret and the context in which it was introduced, the model would then know whether it should be flagged. This approach reduces the number of false positives while keeping true-positive rates stable.
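A compact sketch of that two-stage design follows: cheap rules surface candidate secrets, a rule-based entropy filter discards obvious noise, and a context-aware classifier makes the final call. The regex, the entropy threshold, and the LLM stub are all illustrative assumptions, not any vendor's actual rules.

```python
import math
import re

# Stage 1: a deliberately relaxed rule, since scanners favor recall.
CANDIDATE = re.compile(
    r"(?P<name>\w*(?:key|token|secret)\w*)\s*=\s*[\"'](?P<value>[^\"']{16,})[\"']",
    re.IGNORECASE)

def shannon_entropy(s: str) -> float:
    # Real credentials tend to be high-entropy; repetitive strings are not.
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def llm_says_real_secret(line: str) -> bool:
    # Stub: a production system would prompt a fine-tuned LLM with the
    # matched line plus surrounding context and parse its verdict.
    return True

def scan(source: str):
    for line in source.splitlines():
        match = CANDIDATE.search(line)
        if not match:
            continue
        value = match.group("value")
        if shannon_entropy(value) < 3.0:
            continue  # rule-based filter removes low-entropy placeholders
        if llm_says_real_secret(line):
            yield match.group("name"), value

sample = 'api_key = "x9fK2mQ8vL4nR7tY1bZ6wC3j"\ntest_key = "aaaabbbbccccddddeeee"'
print(list(scan(sample)))  # only the high-entropy candidate survives
```

The LLM stage is what the article proposes: given the secret and its context, the model decides whether to flag it, cutting false positives without loosening the upstream rules.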



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development or beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - February 26, 2024

From deepfakes to digital candidates: AI’s political play

Deepfake technology uses AI to create or manipulate still images, video, and audio content, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and realistic-sounding creations that are increasingly difficult to distinguish from authentic content. While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections. ... Techniques like those used in deepfake technology produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or create realistic digital personas based on their public records, speeches, and writings. One possible new application is that someone (or some group) will put forward an AI-created digital persona for public office.


How data governance must evolve to meet the generative AI challenge

“With generative AI bringing more data complexity, organizations must have good data governance and privacy policies in place to manage and secure the content used to train these models,” says Kris Lahiri, co-founder and chief security officer of Egnyte. “Organizations must pay extra attention to what data is used with these AI tools, whether third parties like OpenAI, PaLM, or an internal LLM that the company may use in-house.” Review genAI policies around privacy, data protection, and acceptable use. Many organizations require submitting requests and approvals from data owners before using data sets for genAI use cases. Consult with risk, compliance, and legal functions before using data sets that must meet GDPR, CCPA, PCI, HIPAA, or other data compliance standards. Data policies must also consider the data supply chain and responsibilities when working with third-party data sources. “Should a security incident occur involving data that is protected within a certain region, vendors need to be clear on both their and their customers’ responsibilities to properly mitigate it, especially if this data is meant to be used in AI/ML platforms,” says Jozef de Vries, chief product engineering officer of EDB.


Will AI Replace Consultants? Here’s What Business Owners Say.

“Most consultants aren’t actually that smart," said Michael Greenberg of Modern Industrialists. “They’re just smarter than the average person.” But he reckons the average machine is much smarter. “Consultants generally do non-creative tasks based around systematic analysis, which is yet another thing machines are normally better at than humans.” Greenberg believes some consultants “doing design or user experience, will survive,” but “the run of the mill accounting degree turned business advisor will not.” Someone who has “replaced all of [her] consultants with ChatGPT already, and experienced faster growth,” is Isabella Bedoya, founder of MarketingPros.ai. However, she thinks that because “most people don't know how to use AI, savvy consultants need to leverage it to become even more powerful, effective and efficient for their clients” and stay ahead of the game. Heather Murray, director at Beesting Digital, thinks the inevitable replacement of consultants comes down to quality. “There are so many poor quality consultants that rely rigidly on working their clients through set frameworks, regardless of the individual’s needs. AI could do that easily.”


Effective Code Documentation for Data Science Projects

The first step to effective code documentation is ensuring it’s clear and concise. Remember, the goal here is to make your code understandable to others – and that doesn’t just mean other data scientists or developers. Non-technical stakeholders, project managers, and even clients may need to understand what your code does and why it works the way it does. To achieve this, you should aim to use plain language whenever possible. Avoid jargon and overly complex sentences. Instead, focus on explaining what each part of your code does, why you made the choices you did, and what the expected outcomes are. If there are any assumptions, dependencies, or prerequisites for your code, these should be clearly stated. Remember, brevity is just as important as clarity. ... Data science projects are often dynamic, with models and data evolving over time. This means that your code documentation needs to be equally dynamic. Keeping your documentation up to date is critical to ensuring its usefulness and accuracy. A good practice here is to treat your documentation as part of your code, updating it as you modify or add to your code base.
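Here is a small example of those practices applied to a typical data science helper; the function, column names, and pipeline details are invented for illustration. Note how the docstring explains what, why, and the assumptions, in plain language a non-specialist can follow.

```python
import pandas as pd

def clean_sales_data(df: pd.DataFrame) -> pd.DataFrame:
    """Prepare raw sales records for the forecasting model.

    What: drops rows with a missing order total and parses `order_date`.
    Why: the downstream model cannot handle missing totals, and date
    parsing errors were a recurring source of pipeline failures.
    Assumes: `df` has `order_total` and `order_date` columns.
    Returns: a new DataFrame; the input is not modified.
    """
    cleaned = df.dropna(subset=["order_total"])
    # errors="coerce" turns unparseable dates into NaT instead of raising,
    # so one bad record does not halt the nightly batch job.
    return cleaned.assign(
        order_date=pd.to_datetime(cleaned["order_date"], errors="coerce"))
```

Because the documentation lives next to the code, updating the function forces a re-read of the docstring, which helps keep the two in sync as the project evolves.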


Breaking down the language barrier: How to master the art of communication

Exactly how can cyber professionals go about improving their communication skills? According to Shapely, many people prefer to take short online learning courses. On-the-job coaching or mentorships are other popular upskilling strategies, providing quick and cost-effective practical learning opportunities. For those still early in their cybersecurity career, there is the option of building communication skills as part of a university degree. According to Kudrati, who teaches part-time at La Trobe University, many cybersecurity students must complete one subject on professional skills as part of their course. “This helps train students’ presentation skills, requiring them to present in front of lecturers and classmates as if they’re customers or business teams,” he says. Homing in on communication skills at university or early on in a cybersecurity professional’s career is also encouraged by Pearlson. In a study she conducted into the skills of cybersecurity professionals, she found that while communication skills were in demand, they were lacking, particularly among those in entry roles. 


4 core AI principles that fuel transformation success

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects. One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that’s critical to success when trying to build and implement AI solutions to problems. Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they’re perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.


Don’t Get Left Behind: How to Adopt Data-Driven Principles

Culture change remains the biggest hurdle to data-driven transformation. The disruption inherent in this evolution can put off some key stakeholders, but a few common-sense steps can guide your organization to tackle it successfully.
Read the room - Executive buy-in is crucial to building a data-driven culture. Leadership must get behind the move so the rank-and-file will dedicate the time and effort needed to make the pivot.
Map the landscape - You can’t change what you don’t know. Start by assessing the state of the organization: find the gaps in the existing data infrastructure and forecast any future analytics needs so you can plan for them.
Evaluate your options - Building business intelligence (BI) and artificial intelligence (AI) systems from scratch is labor- and resource-intensive. ... However, there’s no need to reinvent the wheel; consider leveraging managed services to deal with scale and adaptation issues and ask for guidance from your provider’s data architects and scientists.
Think single-source - Fragmentation detracts from the usefulness of data and can mask insights that would be available with better visibility. Implement integrated platforms that provide secure and scalable data pipelines, storage, and insights from end to end.


It’s time for security operations to ditch Excel

Microsoft Excel and Google Sheets are excellent for balancing books and managing cybersecurity budgets. However, they’re less ideal for tackling actual security issues, auditing, tracking, patching, and mapping asset inventories. Surely, our crown jewels deserve better. And yet, security operation teams are drowning in multi-tab tomes that require constant manual upkeep. Using these spreadsheets requires security operations to chase down every team in their organization for input on everything from the mapping of exceptions and end-of-life of machines to tracking hardware and operating systems. This is the only way to gather the information required on when, why and how certain security issues or tasks must be addressed. It’s no wonder, then, that the column reserved for due dates is usually mostly red. This is an industry-wide problem plaguing even multinational enterprises with top CISOs. Even those large enough to have GRC teams still use Excel for upcoming audits to verify remediations, delegate responsibilities and keep track of compliance certifications.


How Leadership Missteps Can Derail Your Cloud Strategy

Cloud computing involves many moving parts working in unison; therefore, leadership must be clear and concise regarding their cloud strategies. Yet often they are not. The problems arise from not acknowledging the complexity inherent in moving to the cloud. It's not a simple plug-and-play transition, but one that requires modifications not only to technology but also to business processes and organizational culture. For these reasons, the scope of the project is easily underestimated. Underestimating the complexity of transitioning to cloud computing can lead to significant pitfalls. Inadequate staff training, lax security measures, and rushed vendor choices together are just the tip of the iceberg. These oversights, seemingly minor at first, can snowball into significant issues down the line. But there's another layer: the iceberg beneath the surface. Focusing merely on the initial outlay while overlooking ongoing operational costs is like ignoring the currents below, both can unexpectedly steer your budget -- and your company -- off course. Acknowledging and managing operational expenses is vital for a thorough and financially stable cloud computing strategy.


The Art of Ethical Hacking: Securing Systems in the Digital Age

It is vital to stress the differences between malicious hacking and ethical hacking. Even though the techniques used may be similar, ethical hacking is carried out with permission and aims to strengthen security. Malicious hacking, on the other hand, entails unlawful access to steal, disrupt, or manipulate data without authorization. Operating within moral and legal bounds, ethical hackers make sure that their actions advance cybersecurity as a whole. Ethical hacking describes a legitimate, authorized attempt to gain access to a computer system, application, or data, imitating the methods and actions of malicious attackers. This approach allows security vulnerabilities to be found and fixed before a real attack can exploit them. ... As individuals and organizations continue to depend on technology for everyday tasks and business operations, the role of ethical hacking in strengthening cybersecurity will only become more crucial. Embracing ethical hacking as a proactive strategy can be the difference between a secure digital environment and one that is susceptible to potentially catastrophic cyberattacks.



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden