Showing posts with label ethical hacking. Show all posts

Daily Tech Digest - November 25, 2024

GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus. ... Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests, as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need. ... GitHub Copilot Code Reviews can review your code in two ways, and provide feedback. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes. Deep reviews can use custom coding guidelines.
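To make the workflow concrete, here is a hedged sketch of the pattern described above: a small function with a descriptive name, followed by the kind of pytest suite a tool like Copilot will often produce from a bare comment containing the word "tests". The function and tests are hypothetical illustrations written for this digest, not actual Copilot output.

```python
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in text, case-insensitive."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

# tests
def test_count_vowels_basic():
    assert count_vowels("hello") == 2

def test_count_vowels_empty():
    assert count_vowels("") == 0

def test_count_vowels_case_insensitive():
    assert count_vowels("AEIOU") == 5

def test_count_vowels_no_vowels():
    assert count_vowels("rhythm") == 0
```

Note the repetition across the generated tests; as the excerpt suggests, this is exactly the kind of code worth refactoring afterwards, for example into a single parametrized test.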


Closed loop optimisation: Opening a world of advantages for marketers

In marketing, closed loop optimisation refers to the collection and analysis of various data across the marketing lifecycle or customer journey to create a continuous cycle of learning and data-led decision-making. By closing the customer journey loop, starting with the first interaction all the way to “post-sale”, brand marketers can evaluate the effectiveness of advertising campaigns and channels, and deploy their resources in initiatives that deliver the best outcomes. ... With advanced analytics solutions, marketing organisations can process structured and unstructured data from internal and external sources to identify emerging trends, customer needs and behaviours, and other metrics that can inform brand strategies. When a health technology company understood with the help of analytics that user-generated content was a key factor in strengthening interactions with customers, it changed the content strategy to include user feedback, and thereby fostered a sense of community, improved credibility, and elevated the brand experience to substantially increase social media engagement within eighteen months. A top U.S. professional basketball team used predictive analytics to uncover new trends and understand the type of content that would resonate best with fans around the world.


The rise of autonomous enterprises: how robotics, AI, and automation are reshaping the workforce of tomorrow

An autonomous enterprise is an organisation that has successfully implemented the best application of automation technologies to function with minimal human intervention in most aspects. From routine administrative tasks to complex decision-making processes, autonomous enterprises leverage AI, ML, and RPA to drive efficiency, accuracy, and agility. Companies across sectors such as manufacturing, healthcare, logistics, and more are looking towards automation to streamline operations, reduce costs, and innovate. ... As human-machine collaboration grows, there is an increasing need for employers and educational institutions to address reskilling and upskilling to prepare the workforce for continuously changing labour markets. This does not mean automation will eliminate human jobs, but it will certainly demand more creativity, critical thinking, and emotional intelligence from human employees—the very qualities AI cannot encapsulate. ... As robotics and AI continue to revolutionise the world, the ethical and governance challenges they raise must be responded to proactively and thoughtfully. Privacy, bias, and accountability issues must be firmly addressed so that these technologies are developed and deployed appropriately.


Overcoming legal and organizational challenges in ethical hacking

A professional ethical hacker must have a broad understanding of various IT systems, networking, and protocols – essentially, a deep “under the hood” knowledge. This foundational expertise allows them to navigate different environments effectively. Additionally, target-specific knowledge is crucial, as the security measures and vulnerabilities can vary significantly based on the technology stack in use. ... AI and machine learning can significantly enhance ethical hacking efforts. On the offensive side, automated processes supported by AI can efficiently identify vulnerabilities and suggest areas for further manual security testing. This streamlines the initial phases of penetration testing and helps uncover potential issues more effectively. Additionally, AI can assist in generating detailed penetration testing reports, saving time and ensuring accuracy. On the defensive side, AI and machine learning are invaluable for detecting anomalies and correlating data to identify potential threats. These technologies enable a proactive approach to cybersecurity, enhancing both offensive and defensive strategies. By using AI and machine learning, ethical hackers can improve their effectiveness. 


Why The Gig Economy Is A Key Target For API Attacks

One of the most difficult attacks to prevent is business logic abuse. Strictly speaking, it isn’t an attack at all. Business logic abuse sees the functionality of the API used against it, so that a task it is supposed to execute is then used to carry out an attack. It might be used to subvert access control, for instance, with attackers manipulating URLs, session tokens, cookies, or hidden fields to gain elevated privileges and access sensitive data or functionality. Or bots may attempt to repeatedly sign up, log in, or execute purchases in order to validate credentials, access unauthorised data, or commit fraud. Perhaps flaws in session tokens or poor handling of session data allow the attacker to hijack sessions and escalate privileges. Or the attacker may try to bypass built-in constraints on business logic by reviewing points of entry such as form fields and coming up with inputs that the developers may not have planned for. ... Legacy app defences rely on embedding JavaScript code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Some of this code, such as CAPTCHAs, also introduces customer friction.
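A minimal sketch of closing one such business-logic hole: the server never trusts client-supplied values (hidden form fields, cookies, URL parameters) for anything it already knows authoritatively. The catalog and request shapes here are hypothetical, not from any particular API.

```python
# Authoritative server-side prices; the client's copy is never trusted.
CATALOG = {"sku-100": 49.99, "sku-200": 9.99}

def create_order(sku: str, client_price: float) -> dict:
    """Price the order from the catalog, flagging tampered client values."""
    if sku not in CATALOG:
        raise ValueError("unknown SKU")
    server_price = CATALOG[sku]
    # A mismatch between what the client sent and what the server knows
    # is a signal of hidden-field or parameter manipulation.
    flagged = abs(client_price - server_price) > 1e-9
    return {"sku": sku, "price": server_price, "flagged": flagged}

# An attacker tampering with a hidden price field still pays list price,
# and the mismatch is flagged for review.
order = create_order("sku-100", 0.01)
```

The same principle generalizes to privileges and workflow state: derive them server-side from the session, never from fields the client can edit.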


From Contractors to OAuth: Emerging SDLC Threats for 2025

Outsourcing software development is common practice but opens the door to significant security risks when not properly managed. These outsourced operations lack the same stringent security measures applied to internal teams, creating blind spots that attackers can easily leverage. A common vulnerability in this scenario is the over-provisioning of access rights. ... Poorly configured CI/CD pipelines are another critical weakness. When organizations outsource software development, they often have little visibility into the security practices of their contractors’ environments. Attackers can exploit poorly configured pipelines to access source code or manipulate software delivery processes. ... Preventing OAuth phishing can be difficult because it exploits user behavior rather than traditional technical vulnerabilities. While phishing training is essential, the best defense is limiting the damage attackers can cause if they gain access. By restricting developer entitlements to only what is necessary for their role, organizations can reduce the impact of a compromised account and prevent broader system breaches. ... The most catastrophic SDLC security breaches in 2025 may not stem from technical vulnerabilities but from poorly managed development teams.
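As a minimal sketch of the entitlement restriction described above (roles, scopes, and the mapping are hypothetical), an authorization check can default to denying anything not explicitly granted, so a compromised contractor account has a small blast radius:

```python
# Each role gets only the scopes its work requires; deny by default.
ROLE_SCOPES = {
    "contractor": {"repo:read"},
    "internal-dev": {"repo:read", "repo:write"},
    "release-eng": {"repo:read", "repo:write", "pipeline:deploy"},
}

def authorize(role: str, scope: str) -> bool:
    """Allow an action only if the role was explicitly granted the scope."""
    return scope in ROLE_SCOPES.get(role, set())

# A phished contractor token can read code, but cannot push changes
# or touch the delivery pipeline.
assert authorize("contractor", "repo:read")
assert not authorize("contractor", "repo:write")
assert not authorize("contractor", "pipeline:deploy")
```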


In a Growing Threat Landscape, Companies Must do Three Things to Get Serious About Cybersecurity

From a practical standpoint, execs and the board make budget decisions about every domain, including security. Unlike other domains, cybersecurity isn’t a profit center for most businesses, so it often gets underfunded compared to business units and projects that generate revenue. That’s a problem. If executives understand how much is at stake at a fundamental business level, they will invest in bolstering their cybersecurity posture. Cybersecurity is essential to protecting profit centers and enabling them to grow safely. And more and more, customers are looking at a company’s security bona fides when making their buying decisions. It’s in the execs’ self-interest to take charge in adopting a strong cybersecurity posture, as they will ultimately be held accountable in the event of a catastrophe. ... It’s also essential to have an honest, objective CISO at the helm of cybersecurity who has power at the executive table. The C-suite and board won’t ever know how to effectively prioritize security unless they have a CISO guiding them accordingly. Communication is central here. There has to be open discussion between the CISO and the rest of the C-suite regularly.


Perimeter Security Is at the Forefront of Industry 4.0 Revolution

Perimeter security is crucial for military and government organizations and business enterprises alike to detect potential threats, deter possible intruders, and delay attempts to breach a secured area or perimeter. Additionally, perimeter security maintains operational continuity within these organizations. To prevent unauthorized entry to the premises, high-security installations, commercial centers, government facilities and other organizations can establish a physical barrier utilizing detection and deterrence techniques. ... The effectiveness of a perimeter security system depends on several factors, such as the design and implementation of the security measures, proper integration of physical and electronic devices, and the expertise of well-trained personnel. A well-designed perimeter security system should provide comprehensive coverage of a building or premises with multiple layers of security that create effective obstacles against intruders. Regular maintenance and testing of the perimeter security system are necessary to ensure its continued effectiveness. It is critical to continuously assess and expand perimeter security measures in order to counter different types of threats and hazards.


5 Trends Reshaping the Data Landscape

Before companies can successfully leverage AI and advanced analytics, it’s urgent to address the “runaway data movement and data pipeline challenges that are so common in enterprises,” he pointed out. “When you think about data movement and data pipelines, most customers have transactional systems or legacy environments that then feed data to downstream systems. Or they’re getting a firehose of data from a variety of sources that are coming from the cloud, and they can be batch or streaming data.” What happens is these organizations “take that data and transform or consume it by multiple business units using their own extract, transform, and load (ETL) solutions,” he illustrated. “They can be completely different types of data. This is typically the first kind of deviation or loss of a unified source of truth for the data.” The ETL solutions that each group manages “have their own user acceptance testing or production environments, which means more copies of data,” he pointed out. “Then that data is fed to multiple systems, maybe for dashboarding or for more low-latency analytics. But it’s also fed to their systems, like OLAP systems or data lakes.” If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said.


Top challenges holding back CISOs’ agendas

With limited resources and an ever-growing list of threats, CISOs are often caught managing multiple projects at once. Some of these might move forward bit by bit, but without clear milestones or measurable progress, it’s difficult to show their real impact. This makes it harder for CISOs to secure extra funding or support, especially when stakeholders can’t see solid, tangible results. “That makes it almost impossible to show meaningful success,” says John Terrill, CSO at Phosphorus. “A lot of times, this can come from trying to boil the ocean.” Many CISOs recommend learning to “speak business” and occasionally scaring the board to get more funding, but these can only go so far. “The company has a finite amount of resources; you need to make peace with that,” Avivi says. ... “Aligning both the workforce and the organization’s leadership around risk appetite helps tremendously to focus your energy and your dollars in the places that most need them,” says Ken Deitz, CISO at Secureworks. “If an organization has a stated risk appetite for security risk, the priorities start to jump off the page.” CISOs should be open about the risk the organization will take if their priorities are not addressed. 



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 18, 2024

3 leadership lessons we can learn from ethical hackers

By nature, hackers possess a knack for looking beyond the obvious to find what’s hidden. They leverage their ingenuity and resourcefulness to address threats and anticipate future risks. And most importantly, they are unafraid to break things to make them better. Likewise, when leading an organization, you are often faced with problems that, from the outside, look unsurmountable. You must handle challenges that threaten your internal culture or your product roadmap, and it’s up to you to decide the right path toward progress. Now is the most critical time to find those hidden opportunities to strengthen your organization and remain fearless in your decisions toward a stronger path. ... Leaders must remove ego and cultivate open communication within their organizations. At HackerOne, we build accountability through company-wide weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask tough questions about the business, and encourage employees to share their perspectives openly without fear of retaliation. ... Most hackers are self-taught enthusiasts. Young and without formal cybersecurity training, they are driven by a passion for their craft. Internal drive propels them to continue their search for what others miss. If there is a way to see the gaps, they will find them. 


So, you don’t have a chief information security officer? 9 signs your company needs one

The cost to hire and retain a CISO is a major stumbling block for some organizations. Even promoting someone from within to a newly created CISO post can be expensive: total compensation for a full-time CISO in the US now averages $565,000 per year, not including other costs that often come with filling the position. ... Running cybersecurity on top of their own duties can be a tricky balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a lot of objectives or goals that don’t relate to security, and those sometimes conflict with one another. Security oftentimes can be at odds with certain productivity goals. But both of those (roles) should be aimed at advancing the success of the organization,” Smith says. ... A virtual CISO is one option for companies seeking to bolster cybersecurity without a full-time CISO. Black says this approach could make sense for companies trying to lighten the load of their overburdened CIO or CTO, as well as firms lacking the size, budget, or complexity to justify a permanent CISO. ... Not having a CISO in place could cost your company business with existing clients or prospective customers who operate in regulated sectors, expect their partners or suppliers to have a rigorous security framework, or require it for certain high-level projects.


Most importantly, AI agents can make advanced capabilities, including real-time data analysis, predictive modeling, and autonomous decision-making, available to a much wider group of people in any organization. That, in turn, gives companies a way to harness the full potential of their data. Simply put, AI agents are rapidly becoming essential tools for business managers and data analysts in industrial businesses, including those in chemical production, manufacturing, energy sectors, and more. ... In the chemical industry, AI agents can monitor and control chemical processes in real time, minimizing risks associated with equipment failures, leaks, or hazardous reactions. By analyzing data from sensors and operational equipment, AI agents can predict potential failures and recommend preventive maintenance actions. This reduces downtime, improves safety, and enhances overall production efficiency. ... AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries. For business managers and data analysts, the key takeaway is clear: AI agents are not just a future possibility—they are a present necessity, capable of driving efficiency, innovation, and growth in today’s competitive industrial environment.


Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes

A healthier approach to app modernization is to focus on modernizing your processes. Despite momentous changes in application deployment technology over the past decade or two, the development processes that best drive software innovation and efficiency — like the interrelated concepts and practices of agile, continuous integration/continuous delivery (CI/CD) and DevOps — have remained more or less the same. This is why modernizing your application delivery processes to take advantage of the most innovative techniques should be every business’s real focus. When your processes are modern, your ability to leverage modern technology and update apps quickly to take advantage of new technology follows naturally. ... In addition to modifying processes themselves, app modernization should also involve the goal of changing the way organizations think about processes in general. By this, I mean pushing developers, IT admins and managers to turn to automation by default when implementing processes. This might seem unnecessary because plenty of IT professionals today talk about the importance of automation. Yet, when it comes to implementing processes, they tend to lean toward manual approaches because they are faster and simpler to implement initially. 


The ‘Great IT Rebrand’: Restructuring IT for business success

To champion his reimagined vision for IT, BBNI’s Nester stresses the art of effective communication and the importance of a solid marketing campaign. In partnership with corporate communications, Nester established the Techniculture brand and lineup of related events specifically designed to align technology, business, and culture in support of enterprise goals. Quarterly Techniculture town hall meetings anchored by both business and technology leaders keep the several hundred Technology Solutions team members abreast of business priorities and familiar with the firm’s money-making mechanics, including a window into how technology helps achieve specific revenue goals, Nester explains. “It’s a can’t-miss event and our largest team engagement — even more so than the CEO videos,” he contends. The next pillar of the Techniculture foundation is Techniculture Live, an annual leadership summit. One third of the Technology Solutions Group, about 250 teammates by Nester’s estimates, participate in the event, which is not a deep dive into the latest technologies, but rather spotlights business performance and technology initiatives that have been most impactful to achieving corporate goals.


The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success

DSPM is a data-focused approach to securing the cloud environment. By addressing cloud security from the angle of discovering sensitive data, DSPM is centered on protecting an organization’s valuable data. This approach helps organizations discover, classify, and protect data across all platforms, including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding vulnerabilities and risks for teams to remediate across the cloud environment, DSPM “gives security teams visibility into where cloud data is stored” and detects risks to that data. Security misconfigurations and vulnerabilities that may result in the exposure of data can be flagged by DSPM solutions for remediation, helping to protect an organization’s most sensitive resources. Beyond simply discovering sensitive data, DSPM solutions also address many questions of data access and governance. They provide insight into not only where sensitive data is located, but which users have access to it, how it is used, and the security posture of the data store. ... Every organization undoubtedly has valuable and sensitive enterprise, customer, and employee data that must be protected against a wide range of threats. Organizations can reap a great deal of benefits from DSPM in protecting data that is not stored on-premises.
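A toy sketch of the discovery-and-classification step a DSPM tool performs: scan the records in a data store for sensitive patterns and report which tables hold which kinds of data. The patterns and store layout are illustrative only; real products cover far more data types and platforms.

```python
import re

# Hypothetical classifiers for two common sensitive data types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_store(store: dict) -> dict:
    """Map each table name to the sorted list of sensitive types found in it."""
    findings = {}
    for table, rows in store.items():
        kinds = set()
        for row in rows:
            for value in row.values():
                for label, pattern in PATTERNS.items():
                    if isinstance(value, str) and pattern.search(value):
                        kinds.add(label)
        if kinds:
            findings[table] = sorted(kinds)
    return findings

store = {
    "users": [{"name": "Ada", "email": "ada@example.com"}],
    "audit": [{"note": "SSN 123-45-6789 redacted late"}],
    "metrics": [{"count": "42"}],
}
```

Once a map like this exists, the access and governance questions the excerpt raises (who can read the `users` table, where does it replicate) can be asked about concrete locations rather than the whole estate.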


The hidden challenges of AI development no one talks about

Currently, AI developers spend too much of their time (up to 75%) on the "tooling" they need to build applications. Unless they have technology that lets them spend less time on tooling, these companies won't be able to scale their AI applications. To add to the technical challenges, nearly every AI startup is reliant on NVIDIA GPU compute to train and run their AI models, especially at scale. Developing a good relationship with hardware suppliers or cloud providers like Paperspace can help startups, but the cost of purchasing or renting these machines quickly becomes the largest expense any smaller company will run into. Additionally, there is currently a battle to hire and keep AI talent. We've seen recently how companies like OpenAI are trying to poach talent from other heavy hitters like Google, which makes attracting talent at smaller companies much more difficult. ... Training a deep learning model is almost always extremely expensive. This is a result of the combined costs of the hardware itself, data collection, and employees. In order to ameliorate this issue facing the industry's newest players, we aim to achieve several goals for our users: creating an easy-to-use environment, introducing inherent replicability across our products, and providing access at the lowest possible cost.


Transforming code scanning and threat detection with GenAI

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. ... If you’re a developer with a mountain of feature requests and bug fixes on your plate and then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones are getting pushed to the bottom of the pile? Generative AI-based agentic workflows are giving cybersecurity and engineering teams alike reason to see the light at the end of the tunnel and consider the possibility that a secure SDLC (SSDLC) is on the near-term horizon. And we’re already seeing some promising changes in the market today. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent shoulders most of the burden of running an efficient program. ... AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced patterns, finding the needle in a haystack that humans would almost always miss.


5 Tips for Optimizing Multi-Region Cloud Configurations

Multi-region cloud configurations get very complicated very quickly, especially for active-active environments where you’re replicating data constantly. Containerized microservice-based applications allow for faster startup times, but they also drive up the number of resources you’ll need. Even active-passive environments for cold backup-and-restore use cases are resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and more to achieve a reasonable disaster recovery turnaround time. ... The CAP theorem forces you to choose only two of the three options: consistency, availability, and partition tolerance. Since we’re configuring for multi-region, partition tolerance is non-negotiable, which leaves a battle between availability and consistency. Yes, you can hold onto both, but you’ll drive high costs and an outsized management burden. If you’re running active-passive environments, opt for consistency over availability. This allows you to use Platform-as-a-Service (PaaS) solutions to replicate your database to your passive region. ... For active-passive environments, routing isn’t a serious concern. You’ll use default priority global routing to support failover handling, end of story. But for active-active environments, you’ll want different routing policies depending on the situation in that region.
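As an illustration of default-priority failover routing for an active-passive setup (the region names and health-check inputs are hypothetical), the routing decision reduces to picking the highest-priority region that currently passes its health check:

```python
# Regions in priority order: traffic normally goes to the active region,
# and fails over to the passive DR region when the active one is unhealthy.
REGIONS = [
    {"name": "us-east-1", "priority": 1},  # active
    {"name": "eu-west-1", "priority": 2},  # passive / DR
]

def pick_region(healthy: set) -> str:
    """Return the highest-priority region that passes its health check."""
    for region in sorted(REGIONS, key=lambda r: r["priority"]):
        if region["name"] in healthy:
            return region["name"]
    raise RuntimeError("no healthy region available")

# Normal operation routes to the active region; when its health check
# fails, traffic moves to the passive region automatically.
assert pick_region({"us-east-1", "eu-west-1"}) == "us-east-1"
assert pick_region({"eu-west-1"}) == "eu-west-1"
```

Active-active routing is where this gets harder: instead of a single priority list you need per-situation policies (latency-based, geolocation, weighted), which is exactly the complexity the excerpt warns about.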


Why API-First Matters in an AI-Driven World

Implementing an API-first approach at scale is a nontrivial exercise. The fundamental reason for this is that API-first involves “people.” It’s central to the methodology that APIs are embraced as socio-technical assets, and therefore, it requires a change in how “people,” both technical and non-technical, work and collaborate. Some common objections to adopting API-first within organizations raise their heads, as well as some newer framings, given the eagerness of many to participate in the AI-hyped landscape. ... Don’t try to design for all eventualities. Instead, follow good extensibility patterns that enable future evolution and design “just enough” of the API based on current needs. There are added benefits when you combine this tactic with API specifications, as you can get fast feedback loops on that design before any investments are made in writing code or creating test suites. ... An API-first approach is powerful precisely because it starts with a use-case-oriented mindset, thinking about the problem being solved and how best to present data that aligns with that solution. By exposing data thoughtfully through APIs, companies can encapsulate domain-specific knowledge, apply business logic, and ensure that data is served securely, self-service, and tailored to business needs.
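One common extensibility pattern behind designing "just enough" is the tolerant reader: clients bind only to the fields they need and ignore everything else, so the API can add fields later without breaking existing consumers. A minimal sketch, using a hypothetical order-response shape:

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    status: str

def parse_order(payload: dict) -> Order:
    """Tolerant reader: pull the known fields, silently skip the rest."""
    return Order(id=payload["id"], status=payload["status"])

# A later v2 response with extra fields still parses cleanly in a v1 client,
# so the API could evolve without a breaking change.
v2_payload = {"id": "o-1", "status": "shipped", "carrier": "acme", "eta": "2d"}
order = parse_order(v2_payload)
```

Combined with an API specification, this design can be reviewed and mocked before any server code exists, which is the fast feedback loop the excerpt describes.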



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - September 23, 2024

Clear as mud: global rules around AI are starting to take shape but remain a little fuzzy

There is some subjectivity within the EU efforts, as “high risk” is defined as able to cause harm to society, which could receive wildly different interpretations. That said, the effort comes from the right place, which is to protect and ensure the “fundamental rights of EU citizens.” The EU Council views the act as designed to stimulate investment and innovation, while at the same time carving out exceptions for “military and defense as well as research purposes.” This perspective is not much different from the one the industry offered up in 2022 before the US Senate during discussions on the challenges of cybersecurity in the age of AI. At that hearing, two years ago, the Senate was urged not to stifle innovation, as adversaries and economic competitors in other nations were not going to slow down their own innovation. ... When I asked Price for his thoughts on the US position around global AI that many nations should work together to ensure safety without hampering evolution, he agreed that “security considerations must remain at the forefront of these discussions to ensure that widespread AI adoption does not inadvertently amplify cybersecurity risks.”


Turning Compliance Into Strategy: 4 Tips For Navigating AI Regulation

For Chief Strategy Officers (CSOs), helping their organizations to understand and adapt to AI regulation is essential. CSOs can play a key role in guiding their organizations to turn compliance into strategy ... Establish effective governance frameworks that align with the AI Act’s requirements. This framework should include clear policies on data usage, transparency, accountability and ethical AI practices, as well as implementing AI-driven technologies, to help manage risks. Additionally, developing a governance structure that includes roles and responsibilities for AI oversight, and working with operational leaders to embed governance practices into day-to-day business operations can support the company’s long-term success and ethical reputation. ... Companies that form strategic partnerships are better positioned to stay competitive in the market, helping them navigate regulations like the AI Act. By combining the unique strengths of each partner, business leaders can develop more robust and scalable solutions that are better equipped to handle the nuances of regulations. ... The EU AI Act marks a significant shift in the regulatory landscape, challenging businesses to rethink how they develop and deploy AI technologies. 


‘Harvest now, decrypt later’: Why hackers are waiting for quantum computing

The “harvest now, decrypt later” phenomenon in cyberattacks — where attackers steal encrypted information in the hopes they will eventually be able to decrypt it — is becoming common. As quantum computing technology develops, it will only grow more prevalent. ... The average hacker will not be able to get a quantum computer for years — maybe even decades — because they are incredibly costly, resource-intensive, sensitive, and prone to errors if they are not kept in ideal conditions. To clarify, these sensitive machines must stay just above absolute zero (−459.67 degrees Fahrenheit, to be exact) because thermal noise can interfere with their operations. However, quantum computing technology is advancing daily. Researchers are trying to make these computers smaller, easier to use and more reliable. Soon, they may become accessible enough that the average person can own one. ... The Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) soon plan to release post-quantum cryptographic standards. The agencies are leveraging the latest techniques to make ciphers quantum computers cannot crack.
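The threat model can be illustrated with a toy example. Here a deliberately tiny "RSA" key stands in for keys that quantum computers may eventually break: the ciphertext captured today needs nothing further from the victim once the key is factored years later. This is a sketch for intuition only, not a realistic attack (Shor's algorithm plays the role that trial division plays here).

```python
# Tiny textbook-RSA parameters, breakable by hand; illustration only.
p, q, e = 251, 239, 65537
n = p * q                           # public modulus (59989)
message = 4242
ciphertext = pow(message, e, n)     # attacker records this today

# Years later: factor n (trivial at this size), derive the private key.
f = next(i for i in range(2, n) if n % i == 0)
phi = (f - 1) * (n // f - 1)
d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)
recovered = pow(ciphertext, d, n)   # the harvested data is now readable
```

Migrating to post-quantum ciphers before large quantum computers exist is the only way to keep today's captured traffic unreadable, which is why the CISA/NIST standards matter now rather than later.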


AI-driven demand forecasting ensures we’re ‘game-ready’ by predicting user behaviour and traffic

At Dream Sports, AI and machine learning are central to enhancing user experiences, optimising predictions, and securing our platform. AI-driven demand forecasting ensures we’re “game-ready” by predicting user behaviour and traffic for smooth gameplay during peak times. With over 250 million users, our ML systems safeguard platform integrity, detecting and preventing violations to ensure fair play. We also leverage ML to personalise user experiences, optimise rewards programs, and use causal inference for data-driven decisions across game recommendations and contest management. Generative AI initiatives include developing an AI Coach and enhancing user verification and customer success systems. Our collaboration with Columbia University’s Dream Sports AI Innovation Centre advances AI/ML applications in sports, focusing on predictive modelling, fan engagement, and sports tech optimisation. This partnership, alongside internal initiatives, helps us lead in reshaping sports technology with more immersive, personalised experiences through the rise of generative AI.


5 things your board needs to know about innovation and thought leadership

The most successful organizations have a programmatic approach to managing innovation and thought leadership, which helps them build organizational competency over time in both disciplines. How it’s structured is less important, since it can be centralized, decentralized, or hybrid, but having a defined program with at minimum a mission, vision, strategy, and operating plan is critical. As an example, the US Navy set a vision for 2030 related to the future of naval information warfare, creating a Hollywood-produced video, which became a north star for the organization, unlocking millions in funding for AI. The focus and types of innovation and thought leadership you pursue are important, too. In addition to an internal and client-facing focus, have a known set of innovation enablers you plan to pursue, such as data and analytics, automation, adaptability, cloud, digital twins and AI, but be open to adding others as needed. The same is true for your editorial calendar for thought leadership and the topics you plan to address. And hear out new thought leadership topics that may come from left field, which could benefit customers. In addition, keep the board apprised of your multi-year innovation journey, goals and objectives.


Cloud Security Risk Prioritization is Broken. Here’s How to Fix It.

Business context is critical. It’s easy to understand, for example, that a CVE in a payment application is high priority, whereas the same CVE in a search application is low priority. Security programs must also take this into account. Effective security paradigms understand which detected vulnerabilities have the greatest business impact, so security teams aren’t spending time prioritizing lower-risk vulnerabilities. Traditional security applications run tests on code before it’s pushed. While this pre-production testing is still a best practice, it misses how code interacts with the environment variables, configurations, and sensitive data it will coexist with once deployed. This insight is essential when you’re working to understand how a cloud-native application will function when live. Technologies such as application security posture management (ASPM) facilitate a more proactive approach by automating security review processes in production and creating a live view of an application, its vulnerabilities, and business risks. ASPM provides visibility into what’s happening in the cloud, giving security teams a better understanding of application behavior and attack surfaces so they can prioritize appropriately.
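As a rough illustration of the payment-versus-search example, a contextual score might weight raw severity by business criticality. This is a hypothetical sketch, not part of any ASPM product; the application names and weights are invented:

```python
# Hypothetical sketch: weight a vulnerability's severity by the
# business criticality of the application it affects.

BUSINESS_WEIGHT = {
    "payment-service": 1.0,   # handles money: highest business impact
    "search-service": 0.3,    # internal convenience feature
}

def contextual_risk(cvss_score: float, app: str) -> float:
    """Scale a raw CVSS score (0-10) by business context.
    Unknown applications get a neutral 0.5 weight."""
    return round(cvss_score * BUSINESS_WEIGHT.get(app, 0.5), 1)

# The same CVE (CVSS 8.0) lands very differently by context:
print(contextual_risk(8.0, "payment-service"))  # 8.0
print(contextual_risk(8.0, "search-service"))   # 2.4
```

Real ASPM tools fold in far more signal (exposure, reachability, data sensitivity), but the core idea is the same: severity alone is not priority.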


A Look Inside the World of Ethical Hacking to Benefit Security

While there can be many different silos and areas of focus within the ethical hacking community, enterprises tend to interact with these experts in a few different ways. Penetration testing is a common connection between enterprises and ethical hackers, often one driven by compliance requirements. Larger, more mature organizations may employ penetration testers internally in addition to contracting with third parties, while many organizations rely solely on third parties. Enterprises may also engage ethical hackers to participate in red teaming exercises, simulations of real-world attacks. Typically, these exercises have a specific objective, and ethical hackers are free to use whatever means are available to achieve that objective. Hannan offers a physical security assessment as an example of a red teaming exercise. “Walk into a building, find an unlocked computer, and plug a USB device into the computer,” he details. “That might be one of your objectives. How do you get into the building? Do you impersonate a delivery person? Do you impersonate an HVAC person? Do you just show up in a yellow vest and a hard hat and walk into the building? That's left up to you.”


Offensive cyber operations are more than just attacks

AI is already transforming offensive cyber operations by expanding data visibility and streamlining threat intelligence, which are critical for both defensive and offensive purposes. AI enables faster decision-making and the ability to predict and respond to threats more effectively. However, it also empowers adversaries, allowing for more sophisticated attacks which could include generating deepfakes, designing advanced malware, and spreading misinformation at an unprecedented scale on social media platforms. Quantum computing, while still in its early stages, poses a significant long-term challenge. Its potential to break current encryption methods could render many of today’s cybersecurity practices obsolete, creating new vulnerabilities for exploitation. ... A key limitation is time. Once a threat is identified, the race to harden systems and close vulnerabilities begins. The longer it takes to respond, the more risk organizations face. As threats become more sophisticated, defenders must continuously adapt and anticipate new methods of attack, making speed, agility, and proactive defense critical factors in minimizing exposure and mitigating risk.


Quantum Risks Pose New Threats for US Federal Cybersecurity

Adversaries including China are investing heavily in quantum computing in an apparent effort to outpace the United States, where bureaucratic red tape and unforeseen costs could significantly hinder federal efforts to keep up. "Upgrading this infrastructure isn’t going to be quick or cheap," said Georgianna Shea, chief technologist of the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation. Testing for quantum-resistant encryption could reveal compatibility issues with legacy systems, such as increased power demands, reduced performance, larger key sizes and the need to adjust existing protocols and application stacks for keys and digital signatures, she told Information Security Media Group. The Foundation for Defense of Democracies is set to release new guidance for CIOs on Monday that will aim to lay out a road map for quantum readiness. The report is structured as a six-point plan that includes designating a leader, taking inventory of all encryption systems, prioritizing based on risk, understanding mitigation strategies, developing a transition plan and regularly monitoring and adjusting it as needed.


The Rise of Generative AI Fuels Focus on Data Quality

Traditionally, data quality initiatives have often been isolated efforts, disconnected from core business goals and strategic initiatives. Some data quality initiatives are compliance-focused, data cleaning, or departmental efforts — all are very important but not directly tied to larger business goals. This makes it difficult to quantify the impact of data quality improvements and secure the necessary investment. As a result, data quality struggles to gain the crucial attention it deserves. However, the rise of GenAI presents a game-changer for enterprises. GenAI apps rely heavily on high-quality data to generate accurate and reliable results. ... Organizations need a new way to organize the data and make it GenAI-ready, making sure it is continuously synced with the source systems, continuously cleansed according to a company's data quality policies, and continuously protected. But the solution extends beyond technology. Organizations must prioritize data quality by establishing key performance indicators (KPIs) directly linked to GenAI success, such as customer satisfaction, resolution rate, and response time.
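As a toy illustration of the kind of metrics that could feed such KPIs, two common checks are field completeness and exact-duplicate rate. The records and field names here are invented for the example:

```python
# Illustrative sketch: two simple data-quality checks that could
# feed a GenAI-readiness KPI -- completeness and duplicate rate.

records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},             # missing value
    {"id": 3, "email": "a@example.com"},
    {"id": 3, "email": "a@example.com"},  # exact duplicate of the row above
]

def completeness(rows, field):
    """Fraction of rows where `field` is populated."""
    return sum(1 for r in rows if r.get(field) is not None) / len(rows)

def duplicate_rate(rows):
    """Fraction of rows that are exact duplicates of an earlier row."""
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(rows)

print(completeness(records, "email"))  # 0.75
print(duplicate_rate(records))         # 0.25
```

Tracked continuously against the source systems, numbers like these make "GenAI-ready data" a measurable target rather than a slogan.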



Quote for the day:

“If you want to make a permanent change, stop focusing on the size of your problems and start focusing on the size of you!” -- T. Harv Eker

Daily Tech Digest - September 15, 2024

Data Lakes Evolve: Divisive Architecture Fuels New Era of AI Analytics

“Data lakes led to the spectacular failure of big data. You couldn’t find anything when they first came out,” Sanjeev Mohan, principal at the SanjMo tech consultancy, told Data Center Knowledge. There was no governance or security, he said. What was needed were guardrails, Mohan explained. That meant safeguarding data from unauthorized access and respecting governance standards such as GDPR. It meant applying metadata techniques to identify data. “The main need is security. That calls for fine-grained access control – not just throwing files into a data lake,” he said, adding that better data lake approaches can now address this issue. Now, different personas in an organization are reflected in different permissions settings. ... This type of control was not standard with early data lakes, which were primarily “append-only” systems that were difficult to update. New table formats changed this. Table formats like Delta Lake, Iceberg, and Hudi have emerged in recent years, introducing significant improvements in data update support. For his part, Sanjeev Mohan said standardization and wide availability of tools like Iceberg give end-users more leverage when selecting systems. 
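The persona-based, fine-grained control Mohan describes can be sketched as a column-level access check, in contrast to “just throwing files into a data lake.” Everything here (personas, datasets, columns) is hypothetical, not the API of any lake platform:

```python
# Hypothetical sketch of persona-based, column-level access control
# over a lake dataset. Personas, datasets, and columns are invented.

PERMISSIONS = {
    "analyst":  {"sales": {"region", "amount"}},            # no PII columns
    "engineer": {"sales": {"region", "amount", "email"}},   # full access
}

def read_columns(persona: str, dataset: str, columns: set) -> set:
    """Return only the columns this persona may read.
    Unknown personas or datasets get nothing."""
    allowed = PERMISSIONS.get(persona, {}).get(dataset, set())
    return columns & allowed

request = {"region", "amount", "email"}
print(sorted(read_columns("analyst", "sales", request)))   # ['amount', 'region']
print(sorted(read_columns("engineer", "sales", request)))  # ['amount', 'email', 'region']
```

Modern table formats and catalogs enforce this kind of policy at the metadata layer, which is exactly the guardrail early append-only data lakes lacked.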


Data at the Heart of Digital Transformation: IATA's Story

It's always good to know what the business goals are, from a strategic perspective, which informs the data that is needed to enable digital transformation. Data is at the heart of digital transformation. Business strategy comes first and then data strategy, followed by technology strategy. At IATA, we formed the Data Steering Group and identified critical datasets across the organization. We then set up a data catalog and established a governance structure. This was followed by the launch of the Data Governance Committee and the role of a chief data officer. We're going to be implementing an automated data catalog and some automation tools around data quality. Data governance has allowed us to break down data silos. It has also enabled us to establish IATA's industry data strategy. We treat data as an asset, and that data is not owned by any particular division but looked at holistically at the organizational level. And that has allowed us opportunities to do some exciting things in the AI and analytics space and even in the way we deal with our third-party data suppliers and member airlines.


New Android Warning As Hackers Install Backdoor On 1.3 Million TV Boxes

"This is a clear example of how IoT devices can be exploited by malicious actors," said Ray Kelly, fellow at the Synopsys Software Integrity Group. "The ability of the malware to download arbitrary apps opens the door to a range of potential threats." Everything from a TV box botnet for use in distributed denial-of-service attacks through to stealing account credentials and personal information. Responsibility for protecting users lies with the manufacturers, Kelly said; they must "ensure their products are thoroughly tested for security vulnerabilities and receive regular software updates." "These off-brand devices discovered to be infected were not Play Protect certified Android devices," a Google spokesperson said. "If a device isn't Play Protect certified, Google doesn't have a record of security and compatibility test results." While Play Protect certified devices have undergone testing to ensure both quality and user safety, other boxes may not have. "To help you confirm whether or not a device is built with Android TV OS and Play Protect certified, our Android TV website provides the most up-to-date list of partners," the spokesperson said.


Engineers Day: Top 5 AI-powered roles every engineering graduate should consider

Generative AI engineer: They play a pivotal role in analysing vast datasets to extract actionable insights and drive data-informed decision-making processes. This role demands a comprehensive understanding of statistical analysis, machine learning techniques, and programming languages such as Python and R. ... AI research scientist: They are at the forefront of advancing AI technologies through groundbreaking research and innovation. With a robust mathematical background, professionals in this role delve into programming languages such as Python and C++, harnessing the power of deep learning, natural language processing, and computer vision to develop cutting-edge solutions. ... Machine Learning engineer: Machine learning engineers are tasked with developing cutting-edge machine learning models and algorithms to address complex problems across various industries. To excel in this role, professionals must develop a strong proficiency in programming languages such as Python, along with a deep understanding of machine learning frameworks like TensorFlow and PyTorch. Expertise in data preprocessing techniques and algorithm development is also quite crucial here. 


Kubernetes attacks are growing: Why real-time threat detection is the answer for enterprises

Attackers are ruthless in pursuing the weakest threat surface of an attack vector, and with Kubernetes containers, runtime is becoming a favorite target. That’s because containers are live and processing workloads during the runtime phase, making it possible to exploit misconfigurations, privilege escalations or unpatched vulnerabilities. This phase is particularly attractive for crypto-mining operations, where attackers hijack computing resources to mine cryptocurrency. “One of our customers saw 42 attempts to initiate crypto-mining in their Kubernetes environment. Our system identified and blocked all of them instantly,” Gil told VentureBeat. Additionally, large-scale attacks, such as identity theft and data breaches, often begin once attackers gain unauthorized access during runtime, when sensitive information is in use and thus more exposed. Based on the threats and attack attempts CAST AI saw in the wild and across their customer base, they launched their Kubernetes Security Posture Management (KSPM) solution this week. What is noteworthy about their approach is how it enables DevOps teams to detect and automatically remediate security threats in real time.
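A toy version of runtime detection might flag a container whose resource usage spikes far above its own recent baseline, one crude signature of hijacked resources such as crypto-mining. This is illustrative only and unrelated to CAST AI's actual implementation; the samples and threshold are invented:

```python
# Toy sketch of runtime anomaly detection: flag a container whose
# CPU usage jumps well beyond its own historical baseline.

from statistics import mean, stdev

def is_anomalous(history, current, sigma=3.0):
    """Flag `current` if it exceeds the baseline mean by `sigma`
    standard deviations (guarding against a zero-variance baseline)."""
    mu, sd = mean(history), stdev(history)
    return current > mu + sigma * max(sd, 1e-9)

baseline = [12, 15, 11, 14, 13, 12, 16, 14]  # % CPU under normal workload
print(is_anomalous(baseline, 15))  # False: within the normal range
print(is_anomalous(baseline, 95))  # True: sudden sustained spike
```

Production systems combine many such signals (syscalls, network egress, privilege changes) rather than a single metric, but the principle of comparing live behavior to a learned baseline is the same.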


Begun, the open source AI wars have

Open source leader julia ferraioli agrees: "The Open Source AI Definition in its current draft dilutes the very definition of what it means to be open source. I am absolutely astounded that more proponents of open source do not see this very real, looming risk." AWS principal open source technical strategist Tom Callaway said before the latest draft appeared: "It is my strong belief (and the belief of many, many others in open source) that the current Open Source AI Definition does not accurately ensure that AI systems preserve the unrestricted rights of users to run, copy, distribute, study, change, and improve them." ... Afterwards, in a statement more sorrowful than angry, Callaway wrote: "I am deeply disappointed in the OSI's decision to choose a flawed definition. I had hoped they would be capable of being aspirational. Instead, we get the same excuses and the same compromises wrapped in a facade of an open process." Chris Short, an AWS senior developer advocate for Open Source Strategy & Marketing, agreed, responding to Callaway: "[I] 100 percent believe in my soul that adopting this definition is not in the best interests of not only OSI but open source at large will get completely diluted."


What North Korea’s infiltration into American IT says about hiring

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles. Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators use either Western-based employees for online video interviews or, less successfully, real-time deepfake videoconferencing tools. And they even offer up mailing addresses for receiving paychecks. ... Among her assigned tasks, Chapman maintained a PC farm of computers used to simulate a US location for all the “workers.” She also helped launder money paid as salaries. The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Services. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting. A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. 


Contesting AI Safety

The dangers posed by these machines arise from the idea that they “transcend some of the limitations of their designers.” Even if rampant automation and unpredictable machine behavior may destroy us, the same technology promises unimaginable benefits in the far future. Ahmed et al. describe this epistemic culture of AI safety that drives much of today’s research and policymaking, focused primarily on the technical problem of aligning AI. This culture traces back to the cybernetics and transhumanist movements. In this community, AI safety is understood in terms of existential risks—unlikely but highly impactful events, such as human extinction. The inherent conflict between a promised utopia and cataclysmic ruin characterizes this predominant vision for AI safety. Both the AI Bill of Rights and SB 1047 assert claims about what constitutes a safe AI model but fundamentally disagree on the definition of safety. A model deemed safe under SB 1047 might not satisfy the Safe and Effective principle of the White House AI Blueprint; a model that follows the AI Blueprint could cause critical harm. What does it truly mean for AI to be safe? 


Why Companies Should Embrace Ethical Hackers

Security researchers (or hackers, take your pick) are generally good people motivated by curiosity, not malicious intent. Making guesses, taking chances, learning new things, and trying and failing and trying again is fun. The love of the game and ethical principles are two separate things, but many researchers have both in spades. Unfortunately, the government has historically sided with corporations. Scared by the Matthew Broderick movie WarGames plot, Ronald Reagan initiated legislation that resulted in the Computer Fraud and Abuse Act of 1986 (CFAA). Good-faith researchers have been haunted ever since. Then there is The Digital Millennium Copyright Act (DMCA) of 1998, which made it explicitly illegal to “circumvent a technological measure that effectively controls access to a work protected under [copyright law],” something necessary to study many products. A narrow harbor for those engaging in encryption research was carved out in the DMCA, but otherwise, the law put researchers further in danger of legal action against them. All this naturally had a chilling effect as researchers grew tired of being abused for doing the right thing. Many researchers stopped bothering with private disclosures to companies with vulnerable products and took their findings straight to the public. 


Why AI Isn't Just Hype - But A Pragmatic Approach Is Required

It is far better to take a pragmatic view where you open yourself up to the possibilities but proceed with both caution and some help. That must start with working through the buzzwords and trying to understand what people mean, at least at a top level, by an LLM or a vector search or maybe even a Naive Bayes algorithm. But then, it is also important to bring in a trusted partner to help you move to the next stage to build an amazing new digital product, or to undergo a digital transformation with an existing digital product. Whether you’re in start-up mode, you are already a scale-up with a new idea, or you’re a corporate innovator looking to diversify with a new product – whatever the case, you don’t want to waste time learning on the job, and instead want to work with a small, focused team who can deliver exceptional results at the speed of modern digital business. ... Whatever happens or doesn’t happen to GenAI, as an enterprise CIO you are still going to want to be looking for tech that can learn and adapt from circumstance and so help you do the same. At the end of the day, hype cycle or not, AI is really the one tool in the toolbox that can continuously work with you to analyse data in the wild and in non-trivial amounts.



Quote for the day:

"Your attitude is either the lock on or key to your door of success." -- Denis Waitley

Daily Tech Digest - February 26, 2024

From deepfakes to digital candidates: AI’s political play

Deepfake technology uses AI to create or manipulate still images, video and audio content, making it possible to convincingly swap faces, synthesize speech, fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and-sounding creations that are increasingly difficult to distinguish from authentic content. While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections. ... Techniques like those used in deepfake technology produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or create realistic digital personas based on their public records, speeches and writings. One possible new application is that someone (or some group), will put forward an AI-created digital persona for public office. 


How data governance must evolve to meet the generative AI challenge

“With generative AI bringing more data complexity, organizations must have good data governance and privacy policies in place to manage and secure the content used to train these models,” says Kris Lahiri, co-founder and chief security officer of Egnyte. “Organizations must pay extra attention to what data is used with these AI tools, whether with third parties like OpenAI or PaLM, or with an internal LLM that the company may use in-house.” Review genAI policies around privacy, data protection, and acceptable use. Many organizations require submitting requests and approvals from data owners before using data sets for genAI use cases. Consult with risk, compliance, and legal functions before using data sets that must meet GDPR, CCPA, PCI, HIPAA, or other data compliance standards. Data policies must also consider the data supply chain and responsibilities when working with third-party data sources. “Should a security incident occur involving data that is protected within a certain region, vendors need to be clear on both theirs and their customers’ responsibilities to properly mitigate it, especially if this data is meant to be used in AI/ML platforms,” says Jozef de Vries, chief product engineering officer of EDB.


Will AI Replace Consultants? Here’s What Business Owners Say.

“Most consultants aren’t actually that smart," said Michael Greenberg of Modern Industrialists. “They’re just smarter than the average person.” But he reckons the average machine is much smarter. “Consultants generally do non-creative tasks based around systematic analysis, which is yet another thing machines are normally better at than humans.” Greenberg believes some consultants, “doing design or user experience, will survive,” but “the run of the mill accounting degree turned business advisor will not.” Someone who has “replaced all of [her] consultants with ChatGPT already, and experienced faster growth,” is Isabella Bedoya, founder of MarketingPros.ai. However, she thinks because “most people don't know how to use AI, savvy consultants need to leverage it to become even more powerful, effective and efficient for their clients” and stay ahead of their game. Heather Murray, director at Beesting Digital, thinks the inevitable replacement of consultants is down to quality. “There are so many poor quality consultants that rely rigidly on working their clients through set frameworks, regardless of the individual’s needs. AI could do that easily.” 


Effective Code Documentation for Data Science Projects

The first step to effective code documentation is ensuring it’s clear and concise. Remember, the goal here is to make your code understandable to others – and that doesn’t just mean other data scientists or developers. Non-technical stakeholders, project managers, and even clients may need to understand what your code does and why it works the way it does. To achieve this, you should aim to use plain language whenever possible. Avoid jargon and overly complex sentences. Instead, focus on explaining what each part of your code does, why you made the choices you did, and what the expected outcomes are. If there are any assumptions, dependencies, or prerequisites for your code, these should be clearly stated. Remember, brevity is just as important as clarity. ... Data science projects are often dynamic, with models and data evolving over time. This means that your code documentation needs to be equally dynamic. Keeping your documentation up to date is critical to ensuring its usefulness and accuracy. A good practice here is to treat your documentation as part of your code, updating it as you modify or add to your code base.
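A brief hypothetical example of these principles in practice: a plain-language docstring that states what the code does, why the choice was made, and what it assumes. The function, field names, and default are invented for illustration:

```python
def fill_missing_ages(records, default_age=30):
    """Replace missing ages with a default value.

    Why: downstream models cannot handle None, and dropping rows
    would bias the sample toward users who completed their profile.

    Assumes: each record is a dict with an "age" key that is either
    an int or None. Returns a new list; the input is not modified.
    """
    return [
        {**r, "age": r["age"] if r["age"] is not None else default_age}
        for r in records
    ]

users = [{"age": 25}, {"age": None}]
print(fill_missing_ages(users))  # [{'age': 25}, {'age': 30}]
```

Note that the comment a non-technical reader needs is the "why" (avoiding sample bias), not a restatement of the list comprehension.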


Breaking down the language barrier: How to master the art of communication

Exactly how can cyber professionals go about improving their communication skills? According to Shapely, many people prefer to take short online learning courses. On-the-job coaching or mentorships are other popular upskilling strategies, providing quick and cost-effective practical learning opportunities. For those still early in their cybersecurity career, there is the option of building communication skills as part of a university degree. According to Kudrati, who teaches part-time at La Trobe University, many cybersecurity students must complete one subject on professional skills as part of their course. “This helps train students’ presentation skills, requiring them to present in front of lecturers and classmates as if they’re customers or business teams,” he says. Homing in on communication skills at university or early on in a cybersecurity professional’s career is also encouraged by Pearlson. In a study she conducted into the skills of cybersecurity professionals, she found that while communication skills were in demand, they were lacking, particularly among those in entry roles. 


4 core AI principles that fuel transformation success

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects. One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that’s critical to success when trying to build and implement AI solutions to problems. Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they’re perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.


Don’t Get Left Behind: How to Adopt Data-Driven Principles

Culture change remains the biggest hurdle to data-driven transformation. The disruption inherent in this evolution can put off some key stakeholders, but a few common-sense steps can guide your organization to tackle it successfully. Read the room - Executive buy-in is crucial to building a data-driven culture. Leadership must get behind the move so the rank-and-file will dedicate the time and effort needed to make the pivot. Map the landscape - You can’t change what you don’t know. Start by assessing the state of the organization: find the gaps in the existing data infrastructure and forecast any future analytics needs so you can plan for them. Evaluate your options - Building business intelligence (BI) and artificial intelligence (AI) systems from scratch is labor- and resource-intensive. ... However, there’s no need to reinvent the wheel; consider leveraging managed services to deal with scale and adaptation issues and ask for guidance from your provider’s data architects and scientists. Think single-source - Fragmentation detracts from the usefulness of data and can mask insights that would be available with better visibility. Implement integrated platforms that provide secure and scalable data pipelines, storage, and insights from end to end.


It’s time for security operations to ditch Excel

Microsoft Excel and Google Sheets are excellent for balancing books and managing cybersecurity budgets. However, they’re less ideal for tackling actual security issues, auditing, tracking, patching, and mapping asset inventories. Surely, our crown jewels deserve better. And yet, security operation teams are drowning in multi-tab tomes that require constant manual upkeep. Using these spreadsheets requires security operations to chase down every team in their organization for input on everything from the mapping of exceptions and end-of-life of machines to tracking hardware and operating systems. This is the only way to gather the information required on when, why and how certain security issues or tasks must be addressed. It’s no wonder, then, that the column reserved for due dates is usually mostly red. This is an industry-wide problem plaguing even multinational enterprises with top CISOs. Even those large enough to have GRC teams still use Excel for upcoming audits to verify remediations, delegate responsibilities and keep track of compliance certifications.
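As a minimal sketch of what moving this tracking out of a spreadsheet could look like, the mostly-red "due date" column becomes a query rather than manual upkeep. Task names and dates here are invented:

```python
# Minimal sketch: compute overdue remediation tasks automatically
# instead of eyeballing a due-date column in a spreadsheet.

from datetime import date, timedelta

def overdue(tasks, today):
    """Return remediation tasks past their due date, oldest first."""
    late = [t for t in tasks if t["due"] < today]
    return sorted(late, key=lambda t: t["due"])

today = date(2024, 2, 26)
tasks = [
    {"name": "patch log4j host",     "due": today - timedelta(days=30)},
    {"name": "rotate API keys",      "due": today + timedelta(days=7)},
    {"name": "close open S3 bucket", "due": today - timedelta(days=3)},
]
for t in overdue(tasks, today):
    print(t["name"])
# patch log4j host
# close open S3 bucket
```

In practice this data would come from a ticketing or GRC system of record rather than hand-typed rows, which is precisely the point: the list updates itself as teams close work.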


How Leadership Missteps Can Derail Your Cloud Strategy

Cloud computing involves many moving parts working in unison; therefore, leadership must be clear and concise regarding their cloud strategies. Yet often they are not. The problems arise from not acknowledging the complexity inherent in moving to the cloud. It's not a simple plug-and-play transition, but one that requires modifications not only to technology but also to business processes and organizational culture. For these reasons, the scope of the project is easily underestimated. Underestimating the complexity of transitioning to cloud computing can lead to significant pitfalls. Inadequate staff training, lax security measures, and rushed vendor choices together are just the tip of the iceberg. These oversights, seemingly minor at first, can snowball into significant issues down the line. But there's another layer: the iceberg beneath the surface. Focusing merely on the initial outlay while overlooking ongoing operational costs is like ignoring the currents below, both can unexpectedly steer your budget -- and your company -- off course. Acknowledging and managing operational expenses is vital for a thorough and financially stable cloud computing strategy.


The Art of Ethical Hacking: Securing Systems in the Digital Age

It is vital to stress the differences between malicious hacking and ethical hacking. Even though the techniques used may be similar, ethical hacking is carried out with permission and aims to strengthen security. Malicious hacking, on the other hand, entails unlawful access to steal, disrupt, or manipulate data without authorization. Operating within moral and legal bounds, ethical hackers make sure that their actions advance cybersecurity as a whole. Ethical hacking is the term used to describe a legitimate, sanctioned attempt to gain access to a computer system, application, or its data. It involves imitating the methods and actions of malicious attackers so that security vulnerabilities can be found and fixed before a real attack can exploit them. ... As individuals and organizations continue to depend on technology for everyday tasks and business operations, the role of ethical hacking in strengthening cybersecurity will only become more crucial. Embracing ethical hacking as a proactive strategy can be the difference between a secure digital environment and one that is susceptible to potentially catastrophic cyberattacks.



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden