Daily Tech Digest - March 02, 2024

Rust on the Rise: New Advocacy Expected to Advance Adoption

Recent advocacy and research efforts from agencies like the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), National Institute of Standards and Technology (NIST), and the Office of the National Cyber Director (ONCD) “can serve as valuable evidence of the considerable risk memory-safety vulnerabilities pose to our digital ecosystem,” the Rust Foundation’s Executive Director & CEO, Rebecca Rumbul, told The New Stack. Moreover, Rumbul said the Rust Foundation believes that the Rust programming language is the most powerful tool available to address critical infrastructure security gaps. “As an organization, we are steadfast in our commitment to further strengthening the security of Rust through programs like our Security Initiative,” she said. Meanwhile, looking specifically at software development for space systems, the ONCD report notes that both memory-safe and memory-unsafe programming languages meet the organization’s requirements for developing space systems. “At this time, the most widely used languages that meet all three properties are C and C++, which are not memory-safe programming languages,” the report said.


The Power of Hyperautomation in Banking

Hyperautomation significantly improves operational efficiency within banks by automating routine processes, including document processing, transaction reconciliation, and data entry, decreasing the need for manual intervention. This not only streamlines processes but also reduces errors, leading to a more reliable and cost-effective operation. Banks can use hyperautomation to offer personalized, 24/7 services to their customers. Chatbots and virtual assistants powered by artificial intelligence can respond to inquiries and perform transactions around the clock. Faster response times, coupled with the ability to tailor services to individual customer requirements, lead to enhanced customer satisfaction and loyalty. “Hyperautomation facilitates organizations to improve customer experience by reducing the friction in user self-service applications and streamlining broken onboarding processes. It enables faster support and sales query resolution through relevant integrations, AI/ML, and assistive technologies,” says Arvind Jha, Former General Manager – Product Management and Marketing, Newgen Software.


What Is Data Completeness and Why Is It Important?

Data completeness is an important aspect of Data Quality. Data Quality refers to how accurate and reliable the data is overall. Data completeness specifically focuses on missing data, that is, on how complete the data is, rather than on concerns of inaccurate or duplicated data. A lack of data completeness is normally the result of information that was never collected. For example, if a customer’s name and email address are supposed to be collected, but the email address is missing, it is difficult to communicate with the customer. ... Missing chunks of information restrict or bias the decision-making process. Attempting to perform analytics with incomplete data can produce blind spots and biases, and result in missed opportunities. Currently, business leaders use data analytics to make decisions that range from marketing to investment strategies to medical diagnostics. In some situations, data missing key pieces of information is still used, which can lead to dangerous mistakes and false conclusions. Data completeness should be assessed and improved before performing analytics.
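As a concrete illustration of assessing completeness before analytics, the short sketch below computes how complete each required field is across a set of records. The customer data and field names are invented for the example, not taken from the article:

```python
# Minimal data-completeness check: report the share of records in which
# each required field is actually present before running any analytics.

def completeness_report(records, required_fields):
    """Return {field: fraction of records where the field is present and non-empty}."""
    total = len(records)
    report = {}
    for field in required_fields:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = present / total if total else 0.0
    return report

customers = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Grace", "email": ""},   # email field collected but left empty
    {"name": "Alan"},                 # email field never collected at all
]

print(completeness_report(customers, ["name", "email"]))
```

A report like this makes the gap visible (here, the hypothetical email field is only one-third complete) so it can be fixed before the data feeds a decision.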


A socio-technical approach to data management is crucial in our decentralised world

To improve the odds of successfully building an effective data management strategy, working with a trusted and experienced data partner to help shift the organisation’s data culture is a crucial, and often missing, step. The Data and Analytics Leadership Annual Executive Survey 2023 found that cultural factors are the biggest obstacle to delivering value from data investments. Data fabrics, meshes and modern data stacks will continue to consolidate an increasingly decentralised world by making the management of data easier. However, ensuring control over security and governance, and extracting value from trustworthy data, requires a tactical shift to what we call a socio-technical approach. In other words, any strategy must combine investment in people, process and technology to be successful. This is because data management involves more than the technical aspects of data storage, processing and analysis. It also includes the social aspects of data governance, change management, data quality management, user upskilling and collaboration between different teams. Organisations that use technology best will have an edge over their competitors.


Blockchain is one step away from mainstream adoption

Blockchain’s growth is already reshaping traditional business processes and models. In the financial sector, blockchain facilitates faster and more secure transactions. Supply chain management benefits from increased transparency and traceability, ensuring the authenticity and integrity of products. Smart contracts automate and streamline complex agreements, minimizing the risk of fraud and error. And in addition to sparking rising trading volumes, the SEC’s approval of spot bitcoin ETFs sent a global signal of validation to governments reviewing the viability of blockchain applications in both the private and public sectors. Importantly, the evolution of blockchain has given credence to — and bestowed practicality upon — the concept of decentralized finance (DeFi). We’re already in a reality where traditional financial services are replicated, and even improved, using blockchain technology. This is transformative because it will eliminate the need for intermediaries, opening the door to financial participation for virtually anyone with internet access. This democratization of finance has the potential to provide financial services to underserved populations and redefine the global financial landscape.


Biometrics Regulation Heats Up, Portending Compliance Headaches

What this all means is that compliance will be complicated for companies doing business nationally: they will have to audit their data protection procedures, understand how they obtain consumer consent or allow consumers to restrict the use of such data, and make sure they match the different subtleties in the regulations. Contributing to the compliance headaches: the executive order sets high goals for various federal agencies in how to regulate biometric information, but there could be confusion in how these regulations are interpreted by businesses. For example, does a hospital's use of biometrics fall under rules from the Food and Drug Administration, Health and Human Services, the Cybersecurity and Infrastructure Security Agency, or the Justice Department? Probably all four. ... Meanwhile, AI-generated deepfake video impersonations by criminals that abuse biometric data like face scans are on the rise. Earlier this year, a deepfake attack in Hong Kong was used to steal more than $25 million, and others will certainly follow as AI technology gets better and easier to use for producing biometric fakes. The conflicting regulations and criminal abuses could explain why consumer confidence in biometrics has taken a nosedive.


The Role of Data in Crafting Personalized Customer Journeys

Comprehensive customer profiles draw on data from multiple, often siloed touchpoints, such as online visits, purchases, forms, customer support interactions, social media engagement, mobile app usage, and other channels recognized in the CRM system. This facilitates real-time data processing and helps identify customer behaviors and preferences. As briefly discussed previously, predictive analytics consumes historical customer data and powers forecasting of expected behaviors and preferences. It segments data based on parameters such as demographics, behaviors, and preferences, and ultimately acts as the seed for responsive marketing campaigns. Another important strategy is cross-channel integration. Given the scale of the marketing landscape, it is important to consider all channels and systems. The data collected from multiple sources is then integrated and analyzed through data management platforms to create a unified, cross-channel 360-degree view. Such interoperability delivers an omnichannel experience, thereby increasing customers' lifetime value. To ensure better customer loyalty, implement practices in alignment with privacy regulations.
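The cross-channel integration step can be illustrated with a minimal sketch: records from hypothetical CRM, web, and support feeds, keyed on a shared customer ID, are merged into one unified profile per customer. This is a toy illustration of the idea, not the API of any particular data management platform:

```python
# Toy cross-channel integration: fold records from several channel feeds
# into a single "360-degree" profile per customer ID.

from collections import defaultdict

def unify_profiles(*sources):
    """Merge record dicts from several channel feeds into one profile per customer_id."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["customer_id"]
            # Later feeds fill in (or overwrite) attributes for the same customer.
            profiles[cid].update(
                {k: v for k, v in record.items() if k != "customer_id"}
            )
    return dict(profiles)

# Hypothetical feeds from three silos, all keyed on the same customer ID.
crm = [{"customer_id": 1, "name": "Ada", "segment": "premium"}]
web = [{"customer_id": 1, "last_visit": "2024-02-28"}]
support = [{"customer_id": 1, "open_tickets": 0}]

print(unify_profiles(crm, web, support)[1])
```

In practice the hard parts are identity resolution (agreeing on the shared key) and conflict rules when feeds disagree; the sketch simply lets the last feed win.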


Checkout Lessons: What Banks Need to Borrow from eCommerce

eCommerce has much to teach the financial and healthcare industries, which also experience high seasonality and peak traffic periods. Events like 401(k) sign-ups, healthcare enrollments, and tax days are notorious for bringing down systems. In my experience, performance is synonymous with user experience. ... Many digital-first banks don’t operate physical branches. Their success is due to a singular focus on user experience, performance, speed, flexibility, and a mobile-first approach. This is what has won over the current generation of young people who do not need to visit a teller. It’s crucial for banks to recognize the importance of these advancements and to take action. Otherwise, they risk losing their competitive edge. In the U.S., some banks perform exceptionally well with only an online presence, with USAA as a prime example. Some companies, like Capital One, are innovating by transforming their banks into cafés. They provide WiFi, allowing customers to work and do more than just banking. This shift dramatically enhances the user experience.


Fintech at its Finest: Adding Value with Innovation

The best fintech platforms are constantly listening to their customers. Whether that’s through harnessing the power of AI to create an optimal user experience or continuously innovating based on customer feedback, a good fintech is creating exactly what its customers want and need. ... The best fintech platforms have innovative technologies at their core and are increasingly harnessing AI and machine learning to enhance their services. But crucially, they are also designed to be intuitive for users. After all, businesses have just 10 minutes to set up digital accounts or risk losing consumer trust. Millennials and Gen Z make up a significant part of fintech’s core market, so it’s the providers who can cater to tech-savvy generations and prioritise smooth customer experiences that will differentiate themselves in an increasingly crowded market. ... In the bustling world of fintech, the top platforms set themselves apart by cleverly blending practices to ensure they keep growing and succeed – even when faced with challenges. These platforms develop excellent solutions, using technologies like blockchain, AI and advanced data analytics to tackle old financial problems and improve user experiences.


Enabling Developers To Become (More) Creative

What influence does collaboration have on creativity? Now we are starting to firmly tread into management territory! Since software engineering happens in teams, the question becomes how to build a great team that's greater than the sum of its parts. There are more than just a few factors that influence the making of so-called "dream teams". We could use the term "collective creativity" since, without a collective, the creativity of each genius would not reach as far. The creative power of the lone individual is smaller than we dare to admit. We should not aim to recruit the lone creative genius, but instead try to build collectives of heterogeneous groups with different opinions that manage to push creativity to its limits. ... Managers can start taking simple actions towards that grand goal. For instance, by helping facilitate decision-making, as once communication goes awry in teams, the creative flow is severely impeded. Researcher Damian Tamburri calls this problem "social debt." Just like technical debt, when there's a lot of social debt, don't expect anything creative to happen. Managers should act as community shepherds to help reduce that debt.



Quote for the day:

"A real entrepreneur is somebody who has no safety net underneath them." -- Henry Kravis

Daily Tech Digest - March 01, 2024

Why Large Language Models Won’t Replace Human Coders

Are any of these GenAI tools likely to become substitutes for real programmers? Unless the accuracy of coding answers supplied by models increases to within an acceptable margin of error (i.e., 98-100%), then probably not. Let’s assume for argument’s sake, though, that GenAI does reach this margin of error. Does that mean the role of software engineering will shift so that you simply review and verify AI-generated code instead of writing it? Such a hypothesis could prove faulty if the four-eyes principle is anything to go by. It’s one of the most important mechanisms of internal risk control, mandating that any activity of material risk (like shipping software) be reviewed and double-checked by a second, independent, and competent individual. Unless AI is reclassified as an independent and competent lifeform, it shouldn’t qualify as one pair of eyes in that equation anytime soon. If there’s a future where GenAI becomes capable of end-to-end development and building Human-Machine Interfaces, it’s not in the near future. LLMs can do an adequate job of interacting with text and elements of an image. There are even tools that can convert web designs into frontend code.


The future of farming

SmaXtec’s solution requires cows to swallow what the company calls a “bolus” - a small device that consists of sensors to measure a cow’s pH and temperature, an accelerometer, and a small processor. “It sits inside the cow and constantly measures very important body health parameters, including temperature, the amount of water intake, the drinking volume, the activity of the animal, and the contraction of the rumen in the dairy cow,” Scherer said. Rumination is a process of regurgitation and re-digestion. “You could almost envision this as a Fitbit for cows,” he said, adding that by constantly measuring those parameters at a high density - short timeframes with high robustness and high accuracy - SmaXtec can make assessments about potential diseases that are about to break out. ... Small Robot Company is known for its Tom robot. Tom distantly recalls Doctor Who’s robot dog K9. The device wheels itself up and down fields, capturing images and mapping out the land. The data is then taken from Tom’s SSD and uploaded to the cloud, where an AI identifies the different plants and weeds, and provides a customized fertilizer and herbicide plan for the crops.


The CISO: 2024’s Most Important C-Suite Officer

Short- and long-term solutions to navigating increased regulatory and plaintiff bar scrutiny start with the CISO. Cybersecurity defense strategies, implementation and monitoring fall under the purview of the CISO, who must closely coordinate with other members of the C-suite as well as boards of directors. Recent lawsuits highlight individual fiduciary liability for cybersecurity controls and accurate disclosures. Individual liability demands increased knowledge of, participation in and shared ownership of cybersecurity defense decisions. Gone are the days when liability risks could be eliminated by placing the blame on a single security officer. Boards and other C-suite executives now have personal risks over company cybersecurity defenses and preparedness. CISOs carry primary ownership for formulating and maintaining robust cybersecurity defenses and preparedness. This starts with implementing secure by design and other leading security frameworks. It extends to effective real-time threat monitoring and continual technology assessment of company capabilities to defend against advanced cyber threats or the “Defining Threat of Our Time.”


Generative AI and the big buzz about small language models

LLMs can create a wide array of content from text and images to audio and video, with multimodal systems emerging to handle more than one of these tasks. They process massive amounts of information to execute natural language processing (NLP) tasks that approximate human speech in response to prompts. As such, they are ideal for pulling from vast amounts of data to generate a wide range of content, as well as for conversational AI tasks. This requires a significant number of servers, storage and the all-too-scarce GPUs that power the models — at a cost some organizations are unwilling or unable to bear. It’s also tough to satisfy ESG requirements when LLMs hog compute resources for training, augmenting, fine-tuning and other tasks organizations require to hone their models. In contrast, SLMs consume fewer computing resources than their larger brethren and provide surprisingly good performance — in some cases on par with LLMs on certain benchmarks. They’re also more customizable, allowing organizations to target specific tasks. For instance, SLMs may be trained on curated data sets and run through retrieval-augmented generation (RAG), which helps refine search. For many organizations, SLMs may be ideal for running models on premises.
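The retrieval-augmented generation (RAG) step mentioned above can be sketched minimally: retrieve the stored documents most similar to a query, then prepend them as context to the prompt a model would receive. Bag-of-words cosine similarity stands in for a real embedding model here, and the documents, function names, and prompt format are all illustrative:

```python
# Toy RAG pipeline: score documents against a query, keep the top-k,
# and build a context-stuffed prompt for a (hypothetical) small model.

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    return sorted(
        docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True
    )[:k]

def build_prompt(query, docs):
    """Prepend retrieved context to the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "SLMs consume fewer computing resources than LLMs.",
    "Cows are ruminant animals.",
    "Curated data sets can be used to train small models.",
]
print(build_prompt("what resources do SLMs consume", docs))
```

A production system would swap the word-count vectors for learned embeddings and a vector index, but the shape of the pipeline, retrieve then generate, is the same.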


Captive centers are back. Is DIY offshoring right for you?

Captive centers are no longer just a means of value creation, providing cost savings and driving process standardization. They are driving organization-wide innovation, facilitating digital transformations, and contributing to revenue growth. Unlike earlier generations of what are increasingly being called “global capability centers,” which tended to be large operations set up by multinationals, more than half of last year’s new centers were launched by first-time adopters — and on the smaller side, with fewer than 250 full-time employees; in some cases, fewer than 50. The desire to build internal IT capabilities amid a tight talent market is at the heart of the trend. As companies have grown comfortable with offshore and nearshore delivery, the captive model offers the opportunity to tap larger populations of lower-cost talent without handing the reins to a third party. “Eroding customer satisfaction with outsourcing relationships — per some reports, at an all-time low — has caused some companies to opt to ‘do it themselves,’” says Dave Borowski, senior partner, operations excellence, at West Monroe. What’s more, setting up a captive center no longer needs to be entirely DIY.


Questioning cloud’s environmental impact

Contrary to popular belief, cloud computing is not inherently green. Cloud data centers require a lot of energy to power and maintain their infrastructure. That should be news to nobody. Cloud is becoming the largest user of data center space, perhaps only to be challenged by the growth of AI data centers, which are becoming a developer’s dream. But wait, don’t cloud providers use solar and wind? Although some use renewable energy, not all adopt energy-efficient practices. Many cloud services rely on coal-fired power. Ask cloud providers which data centers use renewable energy. Most will provide a non-answer, saying their power types are complex and ever-changing. I’m not going too far out on a limb in stating that most use nonrenewable power and will do so for the foreseeable future. The carbon emissions from cloud computing largely stem from the power consumed by the providers’ platforms and the inefficiencies embedded within applications running on these platforms. The cloud provider itself may do an excellent job of building a multitenant system that provides good optimization for the servers they run, but they don’t have control over how well their customers leverage these resources.


Revolutionizing Real-Time Data Processing: The Dawn of Edge AI

For effective edge computing, efficient and computationally cost-effective technology is needed. One promising option is reservoir computing, a computational method designed for processing signals that are recorded over time. It can transform these signals into complex patterns using reservoirs that respond nonlinearly to them. In particular, physical reservoirs, which use the dynamics of physical systems, are both computationally cost-effective and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system, which restricts real-time processing and requires adjustments for the best learning performance. ... Recently, Professor Kentaro Kinoshita and Mr. Yutaro Yamazaki developed an optical device with features that support physical reservoir computing and allow real-time signal processing across a broad range of timescales within a single device. Speaking of their motivation for the study, Prof. Kinoshita explains: “The devices developed in this research will enable a single device to process time-series signals with various timescales generated in our living environment in real-time. In particular, we hope to realize an AI device to utilize in the edge domain.”
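The reservoir idea has a direct software analogue in the echo state network: a fixed, randomly connected reservoir responds nonlinearly to the input signal, and only a cheap linear readout is trained. The sketch below is a toy software stand-in for the physical reservoirs discussed above, with illustrative sizes and a simple one-step memory task rather than anything from the paper:

```python
# Toy echo state network: fixed random reservoir, trained linear readout.

import numpy as np

rng = np.random.default_rng(0)
n_res, leak = 50, 0.3

W_in = rng.uniform(-0.5, 0.5, n_res)              # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        # Leaky nonlinear update: the reservoir's "relaxation" is set by leak.
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout, here to recall the input delayed by one step.
u = rng.uniform(-1, 1, 300)
states = run_reservoir(u)
X, y = states[1:], u[:-1]                         # state at t predicts u[t-1]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

The point of the architecture is that all learning is confined to the inexpensive `lstsq` step; in a physical reservoir the state update happens in hardware dynamics instead of the `tanh` loop, which is where the relaxation-time limit mentioned above comes from.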


Agile software promises efficiency. It requires a cultural shift to get right

The end result of these fake agile practices is lip service and ceremonies at the expense of the original manifesto’s principles, Bacon said. ... To get agile right, Wickham recommended building on situations in your organization where agile is practiced relatively effectively. Most often, that involves teams building internal tools, such as administrative panels for customer support or CI/CD pipelines. Those use cases have more tolerance for “let’s put something up, ask for feedback, iterate, repeat,” he said. After all, internal customers are willing to accept seeing something that’s initially imperfect. “This indicates to me that people comprehend agile and have at least a baseline understanding of how to use it, but a lack of willingness to use it as defined when it comes to external customers,” said Wickham. ... “Agile is an easy term to toss around as a ‘solution,’” Richmond said. “But effective agile does not have a cookie-cutter solution to improving execution.” Getting it right requires a focus on understanding the company’s challenges, how those challenges manifest out of the business environment, in what way those challenges impact business outcomes, and then, finally, identifying how to apply agile concepts to the business.


Building a Strong Data Culture: A Strategic Imperative

Effective executive backing is crucial for prioritizing and financing data initiatives that help cultivate an organization’s data-centric culture. Initiatives such as data literacy programs equip employees with vital data skills that are fundamental to fostering such a culture. Nonetheless, these programs often fail to thrive without the robust support of leadership. Results from the same Alation research show that only 15 percent of companies with moderate or weak data leadership integrate data literacy across most departments or throughout the entire organization. This is in stark contrast to the 61 percent adoption rate in companies with strong data leadership. Moreover, strong data leadership involves more than just endorsement; it requires executives to actively engage and set an example in data culture initiatives. For instance, when an executive carves out time from her hectic schedule to partake in data literacy training, it conveys a much more powerful message to her team than if she were to simply instruct others to prioritize such training. This hands-on approach by leaders underscores the importance of data literacy and demonstrates their commitment to embedding a data-driven culture in the organization.


Cybercriminals harness AI for new era of malware development

Threat actors have already shown how AI can help them develop malware with only a limited knowledge of programming languages, brainstorm new TTPs, compose convincing text to be used in social engineering attacks, and increase their operational productivity. Large language models such as ChatGPT remain in widespread use, and Group-IB analysts have observed continued interest on underground forums in ChatGPT jailbreaking and specialized generative pre-trained transformer (GPT) development, looking for ways to bypass ChatGPT’s security controls. Group-IB experts have also noticed how, since mid-2023, four ChatGPT-style tools have been developed for the purpose of assisting cybercriminal activity: WolfGPT, DarkBARD, FraudGPT, and WormGPT – all with different functionalities. FraudGPT and WormGPT are highly discussed tools on underground forums and Telegram channels, tailored for social engineering and phishing. Conversely, tools like WolfGPT, which focus on code or exploits, are less popular due to training complexities and usability issues. Yet their advancement poses risks for sophisticated attacks.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - February 29, 2024

Why governance, risk, and compliance must be integrated with cybersecurity

Incorporating cybersecurity practices into a GRC framework means connected teams and integrated technical controls for the University of Phoenix, where GRC and cybersecurity sit within the same team, according to Larry Schwarberg, the VP of information security. At the university, the cybersecurity risk management framework is primarily created out of a consolidated view of NIST 800-171 and ISO 27001 standards, with this being used to guide other elements of its overall posture. “The results of the risk management framework feed other areas of compliance from external and internal auditors,” Schwarberg says. The cybersecurity team works closely with legal and ethics, compliance and data privacy, internal audit and enterprise risk functions to assess overall compliance with in-scope regulatory requirements. “Since our cybersecurity and GRC roles are combined, they complement each other and the roles focus on evaluating and implementing security controls based on risk appetite for the organization,” Schwarberg says. The role of leadership is to provide awareness, communication, and oversight to teams to ensure controls have been implemented and are effective. 


India's talent crunch: Why choose build approach over buying?

The primary challenge is a shortage of workers equipped with digital skill sets. Despite the high demand for these skills, much of the current workforce has yet to gain the requisite abilities, especially considering the constant evolution of technology. The lack of niche skill sets essential for working with advanced technologies like AI, blockchain, cloud, and data science further contributes to this gap. The turning point, however, is now within reach as businesses and professionals recognise the crucial need for upskilling and reskilling. At DXC India, we have embraced a strategy that prioritises internal talent development, favouring the 'build' approach over the 'buy' strategy. By upskilling our existing workforce with relevant, in-demand skills, we address our talent needs and foster individual career growth. This method is particularly effective as experienced employees can swiftly acquire new skills and undergo cross-training. This agility is an asset in navigating the rapidly evolving business landscape, benefiting both employees and customers. Identifying the specific talent required and subsequently building that talent pool forms the crux of this strategy.


Why does AI have to be nice? Researchers propose ‘Antagonistic AI’

“There was always something that felt off about the tone, behavior and ‘human values’ embedded into AI — something that felt deeply ingenuine and out of touch with our real-life experiences,” Alice Cai, co-founder of Harvard’s Augmentation Lab and researcher at the MIT Center for Collective Intelligence, told VentureBeat. She added: “We came into this project with a sense that antagonistic interactions with technology could really help people — through challenging [them], training resilience, providing catharsis.” But it also comes from an innate human characteristic that avoids discomfort, animosity, disagreement and hostility. Yet antagonism is critical; it is even what Cai calls a “force of nature.” So, the question is not “why antagonism?,” but rather “why do we as a culture fear antagonism and instead desire cosmetic social harmony?,” she posited. Essayist and statistician Nassim Nicholas Taleb, for one, presents the notion of the “antifragile,” which argues that we need challenge and context to survive and thrive as humans. “We aren’t simply resistant; we actually grow from adversity,” Arawjo told VentureBeat.


How companies can build consumer trust in an age of privacy concerns

Aside from reworking the way they interact with customers and their data, businesses should also tackle the question of personal data and privacy with a different mindset – that of holistic identity management. Instead of companies holding all the data, holistic identity management offers the opportunity to “flip the script” and put the power back in the hands of consumers. Customers can pick and choose what to share with businesses, which helps build greater trust. ... Greater privacy and greater personalization may seem to be at odds, but they can go hand in hand. Rethinking their approach to data collection and leveraging new methods of authentication and identity management can help businesses create this flywheel of trust with customers. This will be all the more important with the rise of AI. “It’s never been cheaper or easier to store data, and AI is incredibly good at going through vast amounts of data and identifying patterns of aspects that actual humans wouldn’t even be able to see,” Gore explains. “If you take that combination of data that never dies and the AI that can see everything, that’s when you can see that it’s quite easy to misuse AI for bad purposes. ...”


Testing Event-Driven Architectures with Signadot

With synchronous architectures, context propagation is a given, supported by multiple libraries across multiple languages and even standardized by the OpenTelemetry project. There are also several service mesh solutions, including Istio and Linkerd, that handle this type of routing perfectly. But with asynchronous architectures, context propagation is not as well defined, and service mesh solutions simply do not apply — at least, not now: They operate at the request or connection level, but not at a message level. ... One of the key primitives within the Signadot Operator is the routing key, an opaque value assigned by the Signadot Service to each sandbox and route group that’s used to route requests within the system. Asynchronous applications also need to propagate routing keys within the message headers and use them to determine the workload version responsible for processing a message. ... This is where Signadot’s request isolation capability really shows its utility: This isn’t easily simulated with a unit test or stub, and duplicating an entire Kafka queue and Redis cache for each testing environment can create unacceptable overhead. 
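The routing-key propagation described here can be sketched abstractly: a producer copies the sandbox's routing key into the message headers, and each consumer version decides from that header whether a message is meant for it. This is a hedged illustration of the pattern only, not Signadot's actual implementation or API, and the header name and key values are invented:

```python
# Sketch of routing-key propagation for asynchronous messages:
# the key travels in message headers, and consumers filter on it.

def produce(payload, routing_key=None):
    """Attach the sandbox routing key (if any) as a message header."""
    headers = {}
    if routing_key is not None:
        headers["routing-key"] = routing_key
    return {"headers": headers, "payload": payload}

def should_process(message, my_routing_key):
    """Baseline workloads skip keyed messages; a sandboxed workload
    processes only messages carrying its own key."""
    key = message["headers"].get("routing-key")
    if my_routing_key is None:          # baseline consumer
        return key is None
    return key == my_routing_key        # sandboxed consumer

msg = produce({"order": 42}, routing_key="sandbox-abc")
print(should_process(msg, "sandbox-abc"))   # sandbox version handles it
print(should_process(msg, None))            # baseline version skips it
```

With a real broker such as Kafka, the same idea maps onto native message headers; the essential point is that, absent a service mesh, the application itself must carry the key from producer to consumer.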


The 7 Rs of Cloud Migration Strategy: A Comprehensive Overview

With the seven Rs as your compass, it’s time to chart your course through the inevitable challenges that arise on any AWS migration journey. By anticipating these roadblocks and proactively addressing them, you can ensure a smoother and more successful transition to the cloud. ... Navigating the vast and ever-evolving AWS ecosystem can be daunting, especially for organizations with limited cloud experience. This complexity, coupled with a potential skill gap in your team, can lead to inefficient resource utilization, suboptimal architecture choices, and delayed timelines. ... Migrating sensitive data and applications to the cloud requires meticulous attention to security protocols and compliance regulations. Failure to secure your assets can lead to data breaches, reputational damage, and hefty fines. ... While leveraging the full range of AWS services can offer significant benefits, over-reliance on proprietary solutions can create an unhealthy dependence on a single vendor. This can limit your future flexibility and potentially increase costs. ... While AWS offers flexible pricing models and optimization tools, managing cloud costs effectively requires ongoing monitoring and proactive adjustments.


What is a chief data officer? A leader who creates business value from data

The chief data officer (CDO) is a senior executive responsible for the utilization and governance of data across the organization. While the chief data officer title is often shortened to CDO, the role shouldn’t be confused with chief digital officer, which is also frequently referred to as CDO. ... Although some CIOs and CTOs find CDOs encroach on their turf, Carruthers says the boundaries are distinct. CDOs are responsible for areas such as data quality, data governance, master data management, information strategy, data science, and business analytics, while CIOs and CTOs manage and implement information and computer technologies, and manage technical operations, respectively. ... The chief data officer is responsible for the fluid that goes in the bucket and comes out; that it goes to the right place, and that it’s the right quality and right fluid to start with. Neither the bucket nor the water works without the other. ... Gomis says he’s seen chief data officers come from marketing backgrounds, and that some are MBAs who’ve never worked in data analytics before. “Most of them have failed, but the companies that hired them felt that the influencer skillset was more important than the data analytics skillset,” he says.


The UK must become intentional about data centers to meet its digital ambitions

For the UK to maintain its leadership position in data centers, it’s not enough to just leave it to chance. A number of trends are now shaping investment flows both within the UK and on the global stage. First, land and power availability. Access to land and power is becoming increasingly constrained in London and surrounding areas. For example, property prices in Slough have risen by 44 percent since 2019, and the Greater London Authority has told some developers there won’t be electrical capacity to build in certain areas of the city until 2035. Data centers use large quantities of electricity, the equivalent of towns or small cities in some cases, to power servers and ensure resilience in service. In West London, Distribution Network Operators have started to raise concerns about the availability of grid supply points powerful enough to meet the rapid influx of requests from data center operators wanting to co-locate adjacent to fiber optic cables that pass along the M4 corridor, and then cross the Atlantic. In response to these power and space concerns, the hyperscalers have already started to favor countries in Scandinavia.


Rubrik CIO on GenAI’s Looming Technical Debt

This is a case of, “Hey, there’s a leak in the boat, and what are you going to do about it? Are you going to let things get drowned? Or are you going to make sure that there is an equal amount of water that leaves the boat?” So, you have to apply that thinking to your annual plan. Typically, I’ll say that there’s going to be a percentage of resources, budget, and effort I’m going to put into reducing tech debt … And that’s where you start competing with other business initiatives. You will have a bunch of business stakeholders that might look at that as something that should just be kicked down the road because they want to use that funding for something else. That’s where, I believe, educating a lot of my business leaders on what that does to the organization comes in. When I don’t address that tech debt on a regular basis, production SLAs start to deteriorate. ... There’s going to be some consolidation and some standardization across the board. So, the first couple of years are going to be rocky for everybody. But that doesn’t scare us, because we’re going to put a more robust governance on top of this new area. We need to have a lot more debates about this internally and say, “Let’s be cautious, guys. Because this is coming from all sides.”


How organizations can navigate identity security risks in 2024

IT, identity, cloud security and SecOps teams need to collaborate around a set of security and lifecycle management processes to support business objectives around security, timely access delivery and operational efficiency. These processes are best optimized by automating manual tasks, while ensuring that the ownership and accountability for manual tasks is well understood. In addition, quantifying and tracking business outcomes in terms of metrics highlights IAM’s effectiveness and identifies areas that need improvement or more automation. Utilizing IAM for cloud and Software as a Service (SaaS) applications introduces a spectrum of challenges, rooted in silos of identity. Each system or application has its own identity model and its own concept of various identity settings and permissions: accounts, credentials, groups, roles, entitlements and other access policies. Misconfigured permissions and settings heighten the likelihood of data breaches. To address these complexities, organizations need business users and security teams to collaborate on an identity management and governance framework and overarching processes for policy-based authentication, SSO, lifecycle management, security and compliance. Automation can streamline these processes and help ensure effective access controls.



Quote for the day:

“People may hear your voice, but they feel your attitude.” -- John Maxwell

Daily Tech Digest - February 28, 2024

3 guiding principles of data security in the AI era

Securing the AI: AI deployments – including data, pipelines, and model output – cannot be secured in isolation. Security programs need to account for the context in which AI systems are used and their impact on sensitive data exposure, effective access, and regulatory compliance. Securing the AI model itself means identifying model risks, over-permissive access, and data flow violations throughout the AI pipeline. Securing from AI: Just like most new technologies, artificial intelligence is a double-edged sword. Cyber criminals are increasingly turning to AI to generate and execute attacks at scale. Attackers are currently leveraging generative AI to create malicious software, draft convincing phishing emails, and spread disinformation online via deep fakes. There’s also the possibility that attackers could compromise generative AI tools and large language models themselves. ... Securing with AI: How can AI become an integral part of your defense strategy? Embracing the technology for defense opens possibilities for defenders to anticipate, track, and thwart cyberattacks to an unprecedented degree. AI offers a streamlined way to sift through threats and prioritize which ones are most critical, saving security analysts countless hours.


Web3 messaging: Fostering a new era of privacy and interoperability

Designed to be interoperable with various decentralized applications (DApps) and blockchain networks, Web3 messaging protocols enable developers to seamlessly integrate messaging functionality into their decentralized services — a stark contrast to their traditional equivalents that host closed ecosystems, which limit communication with users on other platforms. Beoble, a communication infrastructure and ecosystem that allows users to chat between wallets, is one of the Web3 messaging platforms ready to change how people use digital communication. The platform comprises a web-based chat application and a toolkit for seamless integration with DApps. Dubbed “WhatsApp for Web3,” Beoble removes the need for login methods like Twitter or Discord, instead mandating only a wallet for access. Users can log in using their wallets and send texts, images, videos, links and files across blockchain networks. Blockchain app users can utilize emojis and nonfungible token (NFT) stickers in their digital communication with Beoble, adding a layer of personality to their conversations. 


As data takes center stage, Codified wants to bring flexibility to governance

As Gupta sees it, many large companies are authoring policies and trying to implement them in various ways, but he sees software that is too rigid for today’s use cases, leaving them vulnerable, especially when they have to change policy. He wants to change that by translating policy into code that can be implemented in a variety of ways, connected to various applications that need access to the data, and easily changed when new customers or user categories come along. “We let you author policies in natural language, in a declarative way or using a UI - pick your favorite way - but when those policies are authored, we can codify them into something that can be implemented in a number of ways and can be very easily changed,” he said. To that end, the company also enables customers to set conditions, such as whether you’ve had security training in the last 365 days, or you’re already part of a team working on a sensitive project. Ultimately, this enables companies to set hard-coded data access rules based on who the employee is and the applications they are using or projects they are part of, rather than relying on creating groups on which to base these rules.
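As a sketch of what such codified conditions might look like, the snippet below evaluates a declarative policy against a user's attributes. The policy shape and field names are hypothetical, not Codified's actual format:

```python
from datetime import date, timedelta

# Hypothetical policy: access is allowed if ANY listed condition holds.
policy = {
    "resource": "customer_pii",
    "allow_if": [
        {"training_within_days": 365},
        {"member_of_project": "sensitive-project-x"},
    ],
}

def check_access(user, policy):
    """Evaluate the policy's declarative conditions against a user record."""
    for cond in policy["allow_if"]:
        if "training_within_days" in cond:
            cutoff = date.today() - timedelta(days=cond["training_within_days"])
            trained = user.get("last_security_training")
            if trained is not None and trained >= cutoff:
                return True
        if "member_of_project" in cond:
            if cond["member_of_project"] in user.get("projects", ()):
                return True
    return False
```

Because the rules live in data rather than in application code paths, changing policy for a new customer or user category means editing the policy object, not every application that enforces it.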


Looking Forward, Looking Back: A Quarter Century as a CISO

The first distributed denial of service (DDoS) attack occurred in 1999, followed by the Code Red and Nimda worm cyberattacks that targeted web servers in 2001, and SQL Slammer in 2003, which spread rapidly and focused attention on the need to patch vulnerable systems. The end of the millennium also brought Y2K, the Millennium Bug, which exposed the vulnerability of existing computing infrastructures that formatted dates with only the final two digits and raised the profile of CISOs and other security professionals. Organizations recognized the necessity of dedicated executives responsible for managing cybersecurity risks. ... CISOs were soon making the news, and not always in a good way. Former Uber CISO Joe Sullivan was found guilty of felony obstruction of justice and concealing a data breach in October 2022. The following month, CISO Lea Kissner of Twitter (now X) resigned along with the company’s chief privacy officer and its chief compliance officer over concerns that Twitter’s new leadership was pushing for the release of products and platform changes without effective security reviews.


How Generative AI is Revamping Digital Transformation to Change How Businesses Scale

Crucially, generative AI can help to tailor the dining experience for customers in a way that significantly improves the quality of in-house or takeaway eating. This is achieved by GenAI models analyzing data like guest preferences, dietary restrictions, past orders, and behavior to offer personalized menu items and even recommend food and drink pairings. Generative AI will even be capable of using available datasets to generate offers on the fly as an instant call-to-action (CTA) if it deems an online visitor isn't yet ready to convert their interest into action. We're already seeing leading global restaurants announce the implementation of generative AI for their processes. ... Generative AI became the technological buzzword of 2023, and for good reason. However, there will be many hurdles to overcome in the development of the technology before it drives widespread digital transformation. Regulatory hurdles may be tricky to overcome due to issues in how AI programs can handle private data and utilize intellectual property (IP). Quality shortcomings could also cause issues in governance among early LLMs, and we've seen plenty of cases where language models "hallucinate" when dealing with unusual queries.


NIST CSF 2.0 released, to help all organizations, not just those in critical infrastructure

The CSF’s governance component emphasizes that cybersecurity is a major source of enterprise risk that senior leaders should consider alongside others, such as finance and reputation. “Developed by working closely with stakeholders and reflecting the most recent cybersecurity challenges and management practices, this update aims to make the framework even more relevant to a wider swath of users in the United States and abroad,” according to Kevin Stine, chief of NIST’s Applied Cybersecurity Division. ... The framework’s core is now organized around six key functions: Identify, Protect, Detect, Respond, and Recover, along with CSF 2.0’s newly added Govern function. When considered together, these functions provide a comprehensive view of the life cycle for managing cybersecurity risk. The updated framework anticipates that organizations will come to the CSF with varying needs and degrees of experience implementing cybersecurity tools. New adopters can learn from other users’ successes and select their topic of interest from a new set of implementation examples and quick-start guides designed for specific types of users...


Even LLMs need education—quality data makes LLMs overperform

Like any student, LLMs need a good source text to produce good outputs. As Satish Jayanthi, CTO and co-founder of Coalesce, told us, “If there were LLMs in the 1700s, and we asked ChatGPT back then whether the earth is round or flat and ChatGPT said it was flat, that would be because that's what we fed it to believe as the truth. What we give and share with an LLM and how we train it will influence the output.” Organizations that operate in specialized domains will likely need to train or fine-tune LLMs on specialized data that teaches those models how to understand that domain. Here at Stack Overflow, we’re working with our Teams customers to incorporate their internal data into GenAI systems. When Intuit was ramping up their GenAI program, they knew that they needed to train their own LLMs to work effectively in financial domains that use tons of specialized language. And IBM, in creating an enterprise-ready GenAI platform in watsonx, made sure to create multiple domain-aware models for code, geospatial data, IT events, and molecules.


State of FinOps 2024: Reducing Waste and Embracing AI

Engineers remain the biggest beneficiary of FinOps observability, even though "engineering enablement" has dropped to a lower position in the report's ranking of surveyed priorities. This indicates that engineers are those best suited to responding to a sudden change in cost metrics. The report observes that the "engineering persona" is reported as getting the most value from both "FinOps training and self-service reporting." ... While waste reduction is a common driver across all respondents, segmenting the survey by cloud spend revealed that those with smaller budgets would tend to then prioritise improvements in the accuracy of billing forecasts. The report states that these respondents faced the challenge of understanding "the trajectory of spending" prior to it "getting out of hand." Most invested in low-effort solutions such as "manual adjustments" to generated forecast data. In contrast, those with larger budgets tended to prioritise the optimisation of commitment-based discounts to benefit from economies of scale. This included the right-sizing of "reserved instances, savings plans, committed use discounts," as well as specific negotiated discounts.


How to Develop an Effective Governance Risk and Compliance Strategy

“Overcoming silos and fostering communication needs to begin at the top,” Rothaar says in an email interview. Furthermore, aligning GRC goals with broader business objectives ensures both executive management and individual departments recognize the impact that GRC initiatives have on organizational success. “Promoting a culture of communication with open dialogue and knowledge-sharing is essential to a successful and efficient GRC strategy,” she says. Ringel says organizations need to promote awareness and engagement with risk and compliance, because they influence every member of the organization. “You are only as strong as your weakest link when it comes to risk, so making sure everyone is on the same page and treating risk and compliance smartly is key,” she explains. Compliance is less directly obvious, but if those values are not communicated through every department – product design, development, customer support, marketing, and sales – the end product will reflect that disconnect. “Not every employee needs to know specific regulations, but everyone needs to share the values of data governance and compliance,” Ringel says.


Data storage problems and how to fix them

When undertaking the journey to digitisation, it’s important to consider the issues and challenges and, more importantly, know how to avoid them. ... It’s wise not to attempt a massive data overhaul all at once, especially before you’ve considered what data is valuable, how and where you will store the data, and investigated the different options and models available. It all depends on the scope of transformation and the state the organisation is in. For start-ups, it’s a green field and the experience is as good as the plan and its periodic inspection and adaptation. For organisations with historic data to migrate, it can get complex. I have experienced both, and the key was to have identified what data is valuable, a clear cut-off date, and a policy on how far back we digitise. ... If you are unsure on where to start, consult an expert to determine the best solutions and view the initial costs as an investment. Digital transformation of data brings efficiency and time savings and, with those, reduced costs. The long-term benefit can far outweigh the upfront costs. Digital systems are typically faster and more efficient than manual systems.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - February 27, 2024

Market incentives in the pursuit of resilient software and hardware

For cyber security to continue to evolve as a discipline, we need both quantitative and qualitative insights to understand those aspects that, when combined, work most effectively to address threat and risk, along with human factors and operational dimensions. These solutions then need to be coupled with a compelling narrative to explain our conclusions and objectives to a range of audiences. For the quantitative aspects, access to underlying data types and sources is critical. When we think about software and hardware specifically, there are many possible points of measurement which can contribute to our understanding of its intrinsic security and support assurance. ... Improving the resilience of our software and hardware technology stacks in ways that can scale globally is a multi-faceted, sociotechnical challenge. Creating the right market incentives is our priority. Without these in place, we cannot begin to make progress at the pace or scale we need. Our collective interventions to improve engineering best practices and more transparent behaviours must be driven by data, and targeted by research and innovation. All of this requires better access to skills and cyber education, improved tools, and accessible infrastructure. 


Is creating an in-house LLM right for your organization?

Before delving into the world of foundational models and LLMs, take a step back and note the problem you are looking to solve. Once you identify this, it’s important to determine which natural language tasks you need. Examples of these tasks include summarization, named entity recognition, semantic textual similarity, and question answering, among others. ... Before using an AI tool as a service, government agencies need to make sure the service they are using is safe and trustworthy, which isn’t usually obvious and not captured by just looking at an example set of output. And while the executive order doesn’t apply to private sector businesses, these organizations should take this into consideration if they should adopt similar policies. ... Your organization’s data is the most important asset to evaluate before training your own LLM. Those companies that have accumulated high-quality data over time are the luckiest in today’s LLM age, as data is needed at almost every step of the process including training, testing, re-training, and beta tests. High-quality data is the key to success when training an LLM, so it is important to consider what that truly means. 


Privacy Watchdog Cracks Down on Biometric Employee Tracking

In Serco's case, the ICO said Friday that the company had failed to demonstrate why using facial recognition technology and fingerprint scanning was "necessary or proportionate" and that by doing so it had violated the U.K. General Data Protection Regulation. "Biometric data is wholly unique to a person so the risks of harm in the event of inaccuracies or a security breach are much greater - you can't reset someone's face or fingerprint like you can reset a password," said U.K. Information Commissioner John Edwards. "Serco Leisure did not fully consider the risks before introducing biometric technology to monitor staff attendance, prioritizing business interests over its employees' privacy." "There have been a number of warnings that facial recognition and fingerprints are problematic," said attorney Jonathan Armstrong, a partner at Cordery Compliance. "Most data protection regulators don't like technology like this when it is mandatory for employees. If you're looking at this you'll need a solid data protection impact assessment setting out why the tech is needed, why there are no better solutions, and what you're doing to minimize the impact on those affected."


Cloud providers should play by same rules as telcos, EU commissioner tells MWC

“Currently, our regulatory framework is too fragmented. We are not making the most of our single market of 450 million potential customers. We need a true digital single market to facilitate the emergence of pan-European operators with the same scale and business opportunities as their counterparts in other regions of the world. And we need a true level playing field, because in a technological space where telecommunications and cloud infrastructures converge, there is no justification for them not to play by the same rules,” said the European Commissioner. This means, for Breton, “similar rights and obligations for all actors and end-users of digital networks. This means, first and foremost, establishing the ‘country of origin’ principle for telecoms infrastructure services, as is already the case for the cloud, to reduce compliance costs and investment requirements for pan-European operators.” ... Finally, Breton advocated “Europeanizing the allocation of licenses for the use of spectrum. In the technology race to 6G, we cannot afford any more delays in the concession process, with huge disparities in the timing of auctions and infrastructure deployment between Member States...”


Unlocking the Power of Automatic Dependency Management

Dependency automation relies on having a robust and reliable CI/CD system. Integrating automatic dependency updates into the development workflow is going to exercise this system much more frequently than updates done by hand, so this process demands robust testing and continuous integration practices. Any update, while beneficial, can introduce unexpected behaviors or compatibility issues. This is where a strong CI pipeline comes into play. By automatically testing each update in a controlled environment, teams can quickly identify and address any issues. Practices like automated unit tests, integration tests and even canary deployments are invaluable. They act as a safety net, ensuring that updates improve the software without introducing new problems. Investing in these practices streamlines the update process, but also reinforces overall software quality and reliability. ... Coupled with a robust infrastructure that supports these tools, including adequate server capacity and a reliable network, organizations can create an environment where automatic dependency updates thrive, contributing to a more resilient and agile development process.
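The safety-net idea reduces to a small sketch: apply a bump in a throwaway environment, run the full suite, and merge only on green. The helper names below are hypothetical; real pipelines delegate this to a bot such as Renovate or Dependabot plus the CI system:

```python
def gate_update(update, run_suite):
    """Merge a dependency bump only when the full test suite passes.

    `update` is a (name, old_version, new_version) tuple; `run_suite`
    stands in for CI running unit tests, integration tests, and canary
    checks against the proposed bump.
    """
    name, old, new = update
    if run_suite(name, new):
        return f"merged: {name} {old} -> {new}"
    return f"rejected: {name} stays on {old}"
```

With an always-green suite the bump merges unattended; a failing suite rejects the update automatically instead of shipping a regression or paging a human.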


What Is a Good Management Model in Agile Software Development?

Despite that recognition, an approach referred to by Jurgen Appelo as “Management 2.0,” or “doing the right thing wrong,” is still being used. This management style involves a manager who sticks strictly to the organizational hierarchy and forgets that human beings usually don’t like top-down control and mandatory improvements. Within this approach, 1:1 meetings are conducted with employees for individual goal setting. Although this could be considered a good idea — to manage people and their interests — the key is the way managers do it. They should be managing the system around their people instead of managing the people directly. ... Management 3.0, or “Doing the right thing,” can be the appropriate solution, in which organizations are considered to be complex and adaptive systems. Jurgen Appelo describes this style of management as “taking care of the system instead of manipulating the people.” Or, in other words: improving the environment so that it keeps workers engaged and happy “is one of the main responsibilities of management; otherwise, the organization fails to generate value.”


Hacker group hides malware in images to target Ukrainian organizations

The attacks detected by Morphisec delivered a malware loader known as IDAT or HijackLoader that has been used in the past to deliver a variety of trojans and malware programs including Danabot, SystemBC, and RedLine Stealer. In this case, UAC-0184 used it to deploy a commercial remote access trojan (RAT) program called Remcos. “Distinguished by its modular architecture, IDAT employs unique features like code injection and execution modules, setting it apart from conventional loaders,” the Morphisec researchers said. “It employs sophisticated techniques such as dynamic loading of Windows API functions, HTTP connectivity tests, process blocklists, and syscalls to evade detection. The infection process of IDAT unfolds in multiple stages, each serving distinct functionalities.” ... To execute the hidden payload, the IDAT loader employs another technique known as module stomping, where the payload is injected into a legitimate DLL file — in this case one called PLA.dll (Performance Logs and Alerts) — to lower the chances that an endpoint security product will detect it.


“Ruthlessly prioritize what’s critical”: Check Point expert on CISOs and the evolving attack surface

Ford argues that CISOs need to face the fact that they cannot secure everything and question how they can best spend their finite resources on attack surface management. This attitude has been reflected in the rise of strategies such as zero trust and Ford says in 2024 CISOs will continue to struggle to secure an increasing number of devices and data and contend with a landscape that is evolving in real time. “I think you have to do two things really well: the first thing I think you have to do is truly identify what’s critical and ruthlessly prioritize what’s critical. The second thing is you have to deploy lasting and intelligent solutions”, Ford argued. “[Businesses] have to deploy solutions that grow and contract with the business and can grow and contract as the threat landscape grows and contracts.” Mitchelson offers some examples of what this sort of deployment might look like in the future, arguing the most potential lies in using technology to realize this elastic functionality. “Internally within the structures of the organization, it could be a matrix type structure whereby you’re actually able to expand and contract internal resourcing within teams as to what you do”, Mitchelson suggests.


Gartner Identifies the Top Cybersecurity Trends for 2024

Security leaders need to prepare for the swift evolution of GenAI, as large language model (LLM) applications like ChatGPT and Gemini are only the start of its disruption. Simultaneously, these leaders are inundated with promises of productivity increases, skills gap reductions and other new benefits for cybersecurity. Gartner recommends using GenAI through proactive collaboration with business stakeholders to support the foundations for the ethical, safe and secure use of this disruptive technology. “It’s important to recognize that this is only the beginning of GenAI’s evolution, with many of the demos we’ve seen in security operations and application security showing real promise,” said ... Outcome-driven metrics (ODMs) are increasingly being adopted to enable stakeholders to draw a straight line between cybersecurity investment and the delivered protection levels it generates. According to Gartner, ODMs are central to creating a defensible cybersecurity investment strategy, reflecting agreed protection levels with powerful properties, and in simple language that is explainable to non-IT executives. 


Using AI to reduce false positives in secrets scanners

Secrets scanners were created to find leaks of such secrets before they reach malicious hands. They work by comparing the source code against predefined rules (regexes) that cover a wide range of secret types. Because they are rule-based, secrets scanners often trade off high false-positive rates against low true-positive rates. The inclination towards relaxed rules to capture more potential secrets results in frequent false positives, leading to alert fatigue among those tasked with addressing these alarms. Some scanners implement additional rule-based filters to decrease false alerts, like checking if the secret resides in a test file or whether it looks like a code variable, function call, CSS selector, etc., through semantic analysis. ... AI can play a role in overcoming this challenge. A large language model (LLM) can be directed at vast amounts of code and fine-tuned (trained) to understand the nuance of secrets and when they should be considered false positives. Given a secret and the context in which it was introduced, this model would then know whether it should be flagged. Using this approach will reduce the number of false positives while keeping true-positive rates stable.
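A minimal rule-based scanner of the kind described might look like the following. The two patterns and the test-file filter are illustrative only; production scanners ship hundreds of rules plus entropy checks:

```python
import re

# Illustrative rules only; real scanners cover many more secret types.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9]{16,})['\"]"
    ),
}

def scan(source: str, path: str = ""):
    """Return (rule_name, matched_text) pairs for each hit.

    The path check is the crude kind of rule-based suppression the
    article describes: drop findings that live in test files.
    """
    if "test" in path:
        return []
    return [
        (name, m.group(0))
        for name, pattern in SECRET_PATTERNS.items()
        for m in pattern.finditer(source)
    ]
```

An LLM-based filter, as proposed above, would sit after `scan`, classifying each finding together with its surrounding context instead of relying on path heuristics alone.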



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development of beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - February 26, 2024

From deepfakes to digital candidates: AI’s political play

Deepfake technology uses AI to create or manipulate still images, video and audio content, making it possible to convincingly swap faces, synthesize speech, fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and-sounding creations that are increasingly difficult to distinguish from authentic content. While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections. ... Techniques like those used in deepfake technology produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or create realistic digital personas based on their public records, speeches and writings. One possible new application is that someone (or some group), will put forward an AI-created digital persona for public office. 


How data governance must evolve to meet the generative AI challenge

“With generative AI bringing more data complexity, organizations must have good data governance and privacy policies in place to manage and secure the content used to train these models,” says Kris Lahiri, co-founder and chief security officer of Egnyte. “Organizations must pay extra attention to what data is used with these AI tools, whether from third parties like OpenAI or PaLM, or an internal LLM that the company may use in-house.” Review genAI policies around privacy, data protection, and acceptable use. Many organizations require submitting requests and approvals from data owners before using data sets for genAI use cases. Consult with risk, compliance, and legal functions before using data sets that must meet GDPR, CCPA, PCI, HIPAA, or other data compliance standards. Data policies must also consider the data supply chain and responsibilities when working with third-party data sources. “Should a security incident occur involving data that is protected within a certain region, vendors need to be clear on both theirs and their customers’ responsibilities to properly mitigate it, especially if this data is meant to be used in AI/ML platforms,” says Jozef de Vries, chief product engineering officer of EDB.


Will AI Replace Consultants? Here’s What Business Owners Say.

“Most consultants aren’t actually that smart,” said Michael Greenberg of Modern Industrialists. “They’re just smarter than the average person.” But he reckons the average machine is much smarter. “Consultants generally do non-creative tasks based around systematic analysis, which is yet another thing machines are normally better at than humans.” Greenberg believes some consultants, “doing design or user experience, will survive,” but “the run-of-the-mill accounting degree turned business advisor will not.” Someone who has “replaced all of [her] consultants with ChatGPT already, and experienced faster growth,” is Isabella Bedoya, founder of MarketingPros.ai. However, she thinks that because “most people don’t know how to use AI, savvy consultants need to leverage it to become even more powerful, effective and efficient for their clients” and stay ahead of the game. Heather Murray, director at Beesting Digital, thinks the inevitable replacement of some consultants comes down to quality. “There are so many poor-quality consultants that rely rigidly on working their clients through set frameworks, regardless of the individual’s needs. AI could do that easily.”


Effective Code Documentation for Data Science Projects

The first step to effective code documentation is ensuring it’s clear and concise. Remember, the goal here is to make your code understandable to others – and that doesn’t just mean other data scientists or developers. Non-technical stakeholders, project managers, and even clients may need to understand what your code does and why it works the way it does. To achieve this, you should aim to use plain language whenever possible. Avoid jargon and overly complex sentences. Instead, focus on explaining what each part of your code does, why you made the choices you did, and what the expected outcomes are. If there are any assumptions, dependencies, or prerequisites for your code, these should be clearly stated. Remember, brevity is just as important as clarity. ... Data science projects are often dynamic, with models and data evolving over time. This means that your code documentation needs to be equally dynamic. Keeping your documentation up to date is critical to ensuring its usefulness and accuracy. A good practice here is to treat your documentation as part of your code, updating it as you modify or add to your code base.
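The documentation advice above (plain language, stated assumptions and dependencies, expected outcomes) can be illustrated with a short docstring. The function below, `normalize_scores`, is a hypothetical example written for this digest, not taken from any particular project:

```python
def normalize_scores(scores):
    """Scale raw model scores to the range [0, 1].

    Why: downstream dashboards expect values that are comparable
    across models, regardless of each model's raw score range.

    Assumptions (stated explicitly, per the guidance above):
    - `scores` is a non-empty list of numbers.
    - If all scores are equal, there is no spread to normalize,
      so every score maps to 0.0.

    Returns a new list; the input list is not modified.
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```

Note that the docstring records not just what the code does but why it does it and which edge case was decided deliberately; those are exactly the details a future reader (technical or not) cannot recover from the code alone.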


Breaking down the language barrier: How to master the art of communication

Exactly how can cyber professionals go about improving their communication skills? According to Shapely, many people prefer to take short online learning courses. On-the-job coaching or mentorships are other popular upskilling strategies, providing quick and cost-effective practical learning opportunities. For those still early in their cybersecurity career, there is the option of building communication skills as part of a university degree. According to Kudrati, who teaches part-time at La Trobe University, many cybersecurity students must complete one subject on professional skills as part of their course. “This helps train students’ presentation skills, requiring them to present in front of lecturers and classmates as if they’re customers or business teams,” he says. Homing in on communication skills at university or early on in a cybersecurity professional’s career is also encouraged by Pearlson. In a study she conducted into the skills of cybersecurity professionals, she found that while communication skills were in demand, they were lacking, particularly among those in entry roles. 


4 core AI principles that fuel transformation success

Around 86% of software development companies are agile, and with good reason. Adopting an agile mindset and methodologies could give you an edge on your competitors, with companies that do seeing an average 60% growth in revenue and profit as a result. Our research has shown that agile companies are 43% more likely to succeed in their digital projects. One reason implementing agile makes such a difference is the ability to fail fast. The agile mindset allows teams to push through setbacks and see failures as opportunities to learn, rather than reasons to stop. Agile teams have a resilience that’s critical to success when trying to build and implement AI solutions to problems. Leaders who display this kind of perseverance are four times more likely to deliver their intended outcomes. Developing the determination to regroup and push ahead within leadership teams is considerably easier if they’re perceived as authentic in their commitment to embed AI into the company. Leaders can begin to eliminate roadblocks by listening to their teams and supporting them when issues or fears arise. That means proactively adapting when changes occur, whether this involves more delegation, bringing in external support, or reprioritizing resources.


Don’t Get Left Behind: How to Adopt Data-Driven Principles

Culture change remains the biggest hurdle to data-driven transformation. The disruption inherent in this evolution can put off some key stakeholders, but a few common-sense steps can guide your organization to tackle it successfully.

Read the room - Executive buy-in is crucial to building a data-driven culture. Leadership must get behind the move so the rank-and-file will dedicate the time and effort needed to make the pivot.

Map the landscape - You can’t change what you don’t know. Start by assessing the state of the organization: find the gaps in the existing data infrastructure and forecast any future analytics needs so you can plan for them.

Evaluate your options - Building business intelligence (BI) and artificial intelligence (AI) systems from scratch is labor- and resource-intensive. ... However, there’s no need to reinvent the wheel; consider leveraging managed services to deal with scale and adaptation issues and ask for guidance from your provider’s data architects and scientists.

Think single-source - Fragmentation detracts from the usefulness of data and can mask insights that would be available with better visibility. Implement integrated platforms that provide secure and scalable data pipelines, storage, and insights from end to end.


It’s time for security operations to ditch Excel

Microsoft Excel and Google Sheets are excellent for balancing books and managing cybersecurity budgets. However, they’re less ideal for tackling actual security issues, auditing, tracking, patching, and mapping asset inventories. Surely, our crown jewels deserve better. And yet, security operation teams are drowning in multi-tab tomes that require constant manual upkeep. Using these spreadsheets requires security operations to chase down every team in their organization for input on everything from the mapping of exceptions and end-of-life of machines to tracking hardware and operating systems. This is the only way to gather the information required on when, why and how certain security issues or tasks must be addressed. It’s no wonder, then, that the column reserved for due dates is usually mostly red. This is an industry-wide problem plaguing even multinational enterprises with top CISOs. Even those large enough to have GRC teams still use Excel for upcoming audits to verify remediations, delegate responsibilities and keep track of compliance certifications.
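To make the spreadsheet problem concrete, here is a minimal sketch of what moving remediation tracking out of a multi-tab workbook and into structured, queryable records might look like. All names here (`RemediationTask`, the assets, the owners) are hypothetical illustrations, not a reference to any real tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationTask:
    """One row that would otherwise live in a spreadsheet tab."""
    asset: str
    issue: str
    owner: str
    due: date

    def overdue(self, today: date) -> bool:
        return today > self.due

# Sample inventory of open security tasks.
tasks = [
    RemediationTask("web-01", "apply OpenSSL patch", "infra", date(2024, 2, 1)),
    RemediationTask("db-02", "rotate credentials", "dbops", date(2024, 4, 1)),
]

# The "mostly red" due-date column becomes a one-line query.
overdue = [t.asset for t in tasks if t.overdue(date(2024, 3, 2))]
```

Once tasks are structured data rather than cells, the chasing-down described above can be replaced by automated queries, per-owner reports, and integrations with ticketing, instead of manual upkeep.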


How Leadership Missteps Can Derail Your Cloud Strategy

Cloud computing involves many moving parts working in unison; leadership must therefore be clear and concise about its cloud strategy. Yet often it is not. The problems arise from failing to acknowledge the complexity inherent in moving to the cloud. It's not a simple plug-and-play transition, but one that requires modifications not only to technology but also to business processes and organizational culture. For these reasons, the scope of the project is easily underestimated, and underestimating the complexity of the transition can lead to significant pitfalls. Inadequate staff training, lax security measures, and rushed vendor choices are just the tip of the iceberg. These oversights, seemingly minor at first, can snowball into significant issues down the line. But there's another layer: the iceberg beneath the surface. Focusing merely on the initial outlay while overlooking ongoing operational costs is like ignoring the currents below; both can unexpectedly steer your budget -- and your company -- off course. Acknowledging and managing operational expenses is vital for a thorough and financially stable cloud computing strategy.


The Art of Ethical Hacking: Securing Systems in the Digital Age

It is vital to stress the differences between malicious hacking and ethical hacking. Although the techniques used may be similar, ethical hacking is carried out with permission and aims to strengthen security. Malicious hacking, on the other hand, entails unlawful access to steal, disrupt, or manipulate data without authorization. Operating within moral and legal bounds, ethical hackers ensure that their actions advance cybersecurity as a whole. Ethical hacking describes a sanctioned attempt to gain unauthorized access to a computer system, application, or data. It involves imitating the methods and actions of malicious attackers so that security vulnerabilities can be found and fixed before a real attack exploits them. ... As individuals and organizations continue to depend on technology for everyday tasks and business operations, the role of ethical hacking in strengthening cybersecurity will only become more crucial. Embracing ethical hacking as a proactive strategy can be the difference between a safe digital environment and one that is susceptible to potentially catastrophic cyberattacks.



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden