Daily Tech Digest - October 31, 2023

Do programming certifications still matter?

Hiring is one area where programming certifications definitely play a role. “One of the key benefits of having programming certifications is that they provide validation of a candidate's skills and knowledge in a particular programming language, framework, or technology,” says Aleksa Krstic, CTO at Localizely, a provider of a cloud-based translation platform. “Certifications can demonstrate that the individual has met certain standards and has the expertise required to perform a specific job.” For employers, programming certifications offer several advantages, Krstic says. “They can help streamline the hiring process, by providing a benchmark for assessing candidates' skills and knowledge,” he says. “Certifications can also serve as a way to filter out applicants who do not meet the minimum requirements.” In cases where multiple candidates are equally qualified, having a relevant certification can give one candidate an edge over others, Krstic says. “When it comes to certifications in general, when we see a junior to mid-level developer armed with programming certifications, it's a big green light for our hiring team,” says Michał Kierul, CEO of software company SoftBlue.


Overseeing generative AI: New software leadership roles emerge

In addition to line-of-business expertise, the rise of AI will mean there is also a growing focus on prompt engineering and in-context learning capabilities. Databricks' Zutshi says, "This is a newer ability for developers to optimize prompts for large language models and build new capabilities for customers, further expanding the reach and capability of AI tools." Yet another area where software leaders will need to take the lead is AI ethics. Software engineering leaders "must work with, or form, an AI ethics committee to create policy guidelines that help teams responsibly use generative AI tools for design and development," Gartner's Khandabattu reports in her analysis. Software leaders will need to identify and help "to mitigate the ethical risks of any generative AI products that are developed in-house or purchased from third-party vendors." Finally, recruiting, developing, and managing talent will also get a boost from generative AI, Khandabattu adds. Generative AI applications can speed up hiring tasks, such as performing a job analysis and transcribing interview summaries.


Generative Agile Leadership: What The Fourth Industrial Revolution Needs

Expanding the metaphor of the head, heart and hands, I've developed eight generative agile leadership (GAL) principles; they are the structure needed to create resilient teams of happy, contributing people who amplify satisfied customers and deliver outcomes for a thriving business. ... The GAL principles come from Peter Senge's learning organization and Ron Westrum’s organizational cultures. The learning organization is an adaptive entity that expands the capabilities of people and the whole system. The generative model is a performance-oriented organizational culture to ensure that people have high trust and low blame to increase the ability to express new ideas. ... The great-person leadership style, which holds that leaders are born and not made, will not age well in the 4IR. The human-centered generative leadership model is the best approach to leading the four generations. The GAL principles are rooted in the idea that leaders should help their employees grow and develop as individuals. Generative leaders focus on creating a learning environment and providing their employees with opportunities to reach their full potential.


IT Must Clean Up Its Own Supply Chain

At the end of our supply chain “clean up” exercise, we were pleased that we had gained a good handle on our vendor services and products. This would enable us to operate more efficiently. We were also determined to never fall into this supply chain quagmire again! To avoid that, we created a set of ongoing supply chain management practices designed to maintain our supply chain on a regular basis. We met regularly with vendors, designed a “no exceptions” contract review as part of every RFP process, and no longer settled for boilerplate vendor contracts that didn’t have expressly stated SLAs. We also made it a point to attend key vendor conferences and to actively participate in vendor client forums, because we believed it would give us an opportunity to influence vendor product and service directions so they could better align with our own. End to end, this exercise consumed time and resources, but it succeeded in capturing our attention. Attention to IT supply chains is even more relevant today as IT increasingly gets outsourced to the cloud.


‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Hancock said genAI development companies are waiting to see how aggressive “or not” government regulators will be with IP protections. “I suspect, as is often the case, we’ll look to Europe to lead here. They’re often a little more comfortable protecting data privacy than the US is, and then we end up following suit,” Hancock said. To date, government efforts to address IP protection against genAI models are at best uneven, according to Litan. “The EU AI Act proposes a rule that AI model producers and developers must disclose copyright materials used to train their models. Japan says AI generated art does not violate copyright laws,” Litan said. “US federal laws on copyright are still non-existent, but there are discussions between government officials and industry leaders around using or mandating content provenance standards.” Companies that develop genAI are more often turning away from indiscriminate scraping of online content and instead purchasing content to ensure they don’t run afoul of IP statutes. That way, they can offer customers purchasing their AI services reassurance they won’t be sued by content creators.


SEC sues SolarWinds and its CISO for fraudulent cybersecurity disclosures

The SolarWinds case could act as a pivotal point for the role of a CISO, transforming it into one that requires a lot more scrutiny and responsibility. "SolarWinds incident highlights the responsibility of CISOs of publicly listed companies in not only managing the cyberattacks but also proactively informing customers and investors about their cybersecurity readiness and controls," said Pareekh Jain, chief analyst at Pareekh Consulting. "This lawsuit highlights that there were red flags earlier that the CISO failed to disclose. This will make corporations and CISOs take notice and take proactive security disclosure more seriously similar to how CFOs take financial information disclosure seriously." "There are many unknowns here; we don’t know if the CISO 'succumbed' to pressure from other leaders or if he was complicit in the hack," said Agnidipta Sarkar, vice president for CISO Advisory at ColorTokens Inc. "In either case, he is the target. But the reality is that the CISO is a very complex role. We are constantly required to navigate internal politics and pushbacks, and unless you are on your toes, you will be at the mercy of external forces at a scale no other CXO is exposed to."


Why adaptability is the new digital transformation

Sustainability and resilience are mature management disciplines because a lot of attention has been paid to developing strategies and implementing solutions to address them. When it comes to adaptability, however, apart from agile methodologies and adaptation as it relates to climate change, there’s very little to learn from in terms of the body of work, which is why I addressed this issue in “A Guide to Adaptive Government: Preparing for Disruption.” Adaptive systems and resilient systems are often confused and thought of as interchangeable, but there’s a vast difference between the two concepts. Whereas an adaptive system restructures or reconfigures itself to best operate in and optimize for the ambient conditions, a resilient system often simply has to restore or maintain an existing steady state. In addition, whereas resilience is a risk management strategy, adaptability is both a risk management and an innovation strategy. The philosophy behind adaptive systems is more about innovation than risk management. It assumes from the start that there are no steady-state conditions to operate within, but that the external environment is constantly changing.


Bringing Harmony to Chaos: A Dive into Standardization

Companies with different engineering teams working on various products often emphasize the importance of standardization. This process helps align large teams, promoting effective collaboration despite their diverse focuses. By ensuring consistency in non-functional aspects, such as security, cost, compliance and observability, teams can interact smoothly and operate in harmony, even with differing priorities. Standardizing these non-functional elements is key for maintaining system strength and resilience. It helps in setting consistent guidelines and practices across the company, minimizing conflicts. The aim is to seamlessly integrate standardization within these elements to improve adaptability and consistency. However, achieving this standardization isn’t easy. Differences in operational methods can lead to inconsistencies. ... The aim of standardization is to create smooth and uniform processes. However, achieving this isn’t always easy. Challenges arise from different team goals, changing technology and the tendency to unnecessarily create something new.


When tightly managing costs, smart founders will be rigorous, not ruthless 

Instead of ruthless, indiscriminate cost-cutting, it is wise to be very frugal about what doesn’t matter while you continue maintaining or even moderately investing in the things that do matter. When making cuts, never lose sight of your people. They’re anxious about the future, and you can’t expect to add more stress and excessive demands to already-stressed workers. ... The outright elimination of things like team lunches, in-person meetings and little daily perks creates instant animosity. Thoughtful cuts instead create visible and tangible reminders of the current environment, especially when considering how important in-person gatherings are to sustaining a robust culture in a remote work environment. Instead of quarterly in-person employee meetups, move to annual and replace the others with a DoorDash gift card and a video meeting. Curtailing all travel — both sales calls and team meetups — not only hurts morale, it allows justifiable excuses for missed targets, lost deals and churned customers.


A Beginner's Guide to Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation is a method that combines the powers of large pre-trained language models (like the one you're interacting with) with external retrieval or search mechanisms. The idea is to enhance the capability of a generative model by allowing it to pull information from a vast corpus of documents during the generation process. ... RAG has a range of potential applications, and one real-life use case is in the domain of chat applications. RAG enhances chatbot capabilities by integrating real-time data. Consider a sports league chatbot. Traditional LLMs can answer historical questions but struggle with recent events, like last night's game details. RAG allows the chatbot to access up-to-date databases, news feeds and player bios. This means users receive timely, accurate responses about recent games or player injuries. For instance, Cohere's chatbot provides real-time details about Canary Islands vacation rentals — from beach accessibility to nearby volleyball courts. Essentially, RAG bridges the gap between static LLM knowledge and dynamic, current information.
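
To make the retrieve-then-generate loop concrete, here is a minimal Python sketch. The toy corpus, the keyword-overlap retrieval, and the generate() stub are all illustrative assumptions; a real RAG system would use embedding-based search over a vector store and a call to an actual LLM.

    # Minimal retrieval-augmented generation loop (illustrative sketch).
    # Retrieval here is naive keyword overlap; real systems use embeddings.

    def score(query: str, doc: str) -> int:
        # Count how many query words also appear in the document.
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        # Rank documents by overlap with the query and keep the top k.
        return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

    def generate(prompt: str) -> str:
        # Stand-in for a call to a real LLM API (hypothetical).
        return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

    corpus = [
        "The Hawks beat the Sharks 3-1 last night; Diaz scored twice.",
        "Player bios: Diaz, forward, joined the Hawks in 2021.",
        "League history: the Sharks won the 2019 championship.",
    ]
    question = "Who scored for the Hawks last night?"
    context = "\n".join(retrieve(question, corpus))
    print(generate(f"Context:\n{context}\n\nQuestion: {question}"))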



Quote for the day:

“Vulnerability is the birthplace of innovation, creativity, and change.” -- Brené Brown

Daily Tech Digest - October 28, 2023

Surviving a ransomware attack begins by acknowledging it’s inevitable

Senior management teams that see ransomware attacks as inevitable are quicker to prioritize actions that seek to reduce the risk of an attack and contain one when it happens. This mindset reframes board-level discussions of cybersecurity from an operating expense to a long-term investment in risk management. CISOs need to be part of that discussion and have a seat on the board. With ransomware attacks inevitable and the risks reaching into the core of any business, CISOs must guide boards and provide them with insights to minimize risk. A great way for CISOs to gain a seat on boards is to show how their teams drive revenue gains by providing continuous operations and reducing risks. “When your board wants to talk about ransomware, remind them that it might take the form of day-to-day improvements — in your patching cadence, how you manage identity, how you defend environments and do infrastructure as code, how you do immutable backups and so forth,” Baer told VentureBeat.


Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI

A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people’s expectations about what’s coming, turning “will never happen” into “will happen faster than you think.” “It’s important to talk about where it’s all headed,” he says, before predicting the development of artificial general intelligence (by which he means machines as smart as humans) as if it were as sure a bet as another iPhone: “At some point we really will have AGI. Maybe OpenAI will build it. Maybe some other company will build it.” Since the release of its sudden surprise hit, ChatGPT, last November, the buzz around OpenAI has been astonishing, even in an industry known for hype. No one can get enough of this nerdy $80 billion startup. World leaders seek (and get) private audiences. Its clunky product names pop up in casual conversation. OpenAI’s CEO, Sam Altman, spent a good part of the summer on a weeks-long outreach tour, glad-handing politicians and speaking to packed auditoriums around the world. 


Lack of federal data privacy law seen hurting IT security

Dawson said the challenge will be overcoming two significant misconceptions about data collection. "The two big myths in this space are, 'They already have everything, so why bother?' and, 'If you have nothing to hide, what are you worried about?'" she said. "Those are two very deliberately structured myths to enable this sense of complacency about all of this data collection." Data collection occurs in multiple facets of consumer life, whether that's through online shopping, social media, travel or even online searches. Dawson said companies bring those data points together to create a 360-degree view of a consumer. She asserted that if consumers fully grasped the extent of companies' data collection, they might not consent willingly. ... "All of this data collection -- here's the church you go to, here's the alcohol you like, here's the guilty pleasure you like to read that nobody knows about. Now all of that can be merged together and can create a very different picture about your life in a way that people are probably not going to be very comfortable with," she said.


Why Infrastructure as Code Is Vital for Modern DevOps

Thanks to its ability to tackle the ownership problem, IaC has been embraced by DevOps teams in droves. Because of its ability to abstract, simplify and standardize deployments, IaC has proven a great boon in helping teams achieve continuous integration and continuous delivery (CI/CD). IaC has proven useful to CI/CD practices because it allows DevOps teams to make iterative improvements on apps and services without having to reassign or reconfigure an underlying piece of infrastructure. With IaC, developer teams can focus just on the application, with the onus for infrastructure configuration being on the respective owner as a separate workflow. This is especially useful for complex infrastructure arrangements, such as Kubernetes clusters. Additionally, IaC instructions can be monitored, committed and reverted by teams with ease. Just as they would with a regular coding workflow, code for IaC tools can be rapidly iterated on for infrastructure reconfiguration on the fly to reflect the pace of innovation in a CI/CD environment.
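
The diffable, revertible nature of IaC comes from declarative state reconciliation: desired infrastructure is described as data, compared against what is actually deployed, and only the difference is applied. Below is a toy Python sketch of that idea; the resource names and attributes are invented, and real tools such as Terraform or Pulumi perform this diff against live cloud APIs.

    # Toy reconciler showing the core mechanic behind IaC tools:
    # declare desired state, diff against actual state, apply the difference.

    desired = {  # what the committed config says should exist (invented)
        "web-server": {"size": "small", "replicas": 2},
        "database":   {"size": "large", "replicas": 1},
    }
    actual = {   # what is currently deployed (invented)
        "web-server": {"size": "small", "replicas": 1},
        "cache":      {"size": "small", "replicas": 1},
    }

    def plan(desired, actual):
        create = {k: v for k, v in desired.items() if k not in actual}
        delete = [k for k in actual if k not in desired]
        update = {k: v for k, v in desired.items()
                  if k in actual and actual[k] != v}
        return create, update, delete

    create, update, delete = plan(desired, actual)
    print("create:", create)  # the database is missing
    print("update:", update)  # web-server replicas drifted (declared 2, deployed 1)
    print("delete:", delete)  # the cache is no longer declared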


Why CIOs and CDOs Need to Rethink Data Management and Tap AI To Maximize Insights

The difference between the old model and the modern approach is that in the past, business leaders were swamped with data and tried to sift through it to find business insights. Today, in the era of AI and ML, data-driven companies start with the end goal of relevant business insights and then work backwards by delving into the data. “The top-down approach is a great way to pursue business insights because, if you know what you’re looking for in terms of measuring the efficiency of your business group, and if you have day-to-day challenges that you’re looking to better understand, it would absolutely help to use that as the driver,” says Ajay. This approach ensures companies derive insights from data they know is available. Starting with specific business challenges and finding the corresponding data enables businesses to pull together data fragments, aggregate them, and define specific metrics and KPIs that can be acted on. ... One of the critical drivers for accelerating business growth is the cloud. Ajay explained how he advocates for businesses to recognize how their cloud vendors are helping them accelerate their data-driven ambitions. 


How will cyber security evolve in the data-driven world?

Data breaches within the automotive sector have become more frequent, particularly involving well-known manufacturers and brands. Earlier this year there was a data leak of Toyota customers in Japan which was publicly available for a decade due to a simple technical error. Over two million customers had data exposed—that’s nearly the entire customer base which had signed up for Toyota’s main cloud service platforms since 2012. Then, prominent automotive retailer Arnold Clark was blackmailed by hackers after suffering a data breach. It was reported that customers had their addresses, passports and national insurance numbers leaked on the dark web following a cyber attack on the car retail giant. More recently, Tesla disclosed a data breach impacting roughly 75,000 people. Notably, this is the result of a whistle-blower leak rather than a malicious cyber attack. The compromised information includes names, contact information, and employment-related records associated with current and former employees as well as customer bank details, production secrets, and customer complaints regarding driver assistance systems.


White House to issue AI rules for federal employees

For companies developing AI, the executive order might necessitate an overhaul in how they approach their practices, according to Adnan Masood, chief AI architect at digital transformation services company UST. The new rules may also drive up operational costs initially. "However, aligning with national standards could also streamline federal procurement processes for their products and foster trust among private consumers," Masood said. "Ultimately, while regulation is necessary to mitigate AI’s risks, it must be delicately balanced with maintaining an environment conducive to innovation. "If we tip the scales too far towards restrictive oversight, particularly in research, development, and open-source initiatives, we risk stifling innovation and conceding ground to more lenient jurisdictions globally," Masood continued. "The key lies in making regulations that safeguard public and national interests while still fueling the engines of creativity and advancement in the AI sector." Masood said the upcoming regulations from the White House have been "a long time coming, and it’s a good step [at] a critical juncture in the US government's approach to harnessing and containing AI technology."


How Collaboration Among Stakeholders Can Help Better Manage Insider Threats

Not surprisingly, the most effective approach includes a combination of people, processes, and technology, starting with the latter. Perhaps the most significant challenge is detecting unauthorized or inappropriate viewing of patient records, especially given the “wide span of entry points to gain access to these environments and to the data,” said Fasolo. And while most organizations would like to be able to continuously monitor access to each and every patient record, it simply isn’t realistic. What often ends up happening, according to Culbertson, is that security teams focus their energy on mitigating serious risks. The problem with that tradeoff, however, is that most incidents don’t happen out of the blue. “If you look at an individual’s behavior retrospectively, you see that they did some benign things and built on them,” he noted. “They test the system,” realizing that low-risk incidents are far less likely to be investigated. But that’s where the real threat lies, he said, noting that Protenus’ Protect Patient Privacy solution leverages artificial intelligence to audit “every access to every record, every day.”


How Your CTO And CFO Can Work Together On Tech Costs

CFOs and CTOs need to work together to forecast the TCO annually over the life of an application for budgeting to be more reflective of the true costs to run the enterprise application. This process involves identifying the potential cost takeouts as well, because if code can be 30 percent more efficient, this would further reduce the cost. The CFO is not the only one who needs this data; everyone from the developers to the management does. Our goals as technology professionals should be to understand the efficiency and costs of the features or code we are creating before they are promoted to production. This is the only way to truly control the costs of the application and enterprise cloud bills, which are often way over budget since this mindset is not currently built into operations. ... Just like a car, every server has an engine (capacity) and gas mileage (efficiency) and is run at a level of speed that will either tax the system or is sustainable. We check these regularly as a matter of course for our cars; why not for our technology?
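
As a rough illustration of the car analogy, the capacity (engine), efficiency (gas mileage), and sustainable load of a server can be checked with simple arithmetic. All figures below are invented for the example.

    # Back-of-the-envelope capacity/efficiency check (all numbers invented).
    servers = [
        # (name, capacity in req/s, current load in req/s, monthly cost in $)
        ("app-1", 1000, 950, 400.0),
        ("app-2", 1000, 300, 400.0),
    ]

    SECONDS_PER_MONTH = 60 * 60 * 24 * 30

    for name, capacity, load, cost in servers:
        utilization = load / capacity
        cost_per_request = cost / (load * SECONDS_PER_MONTH)
        status = "over-taxed" if utilization > 0.8 else "sustainable"
        print(f"{name}: {utilization:.0%} utilized ({status}), "
              f"${cost_per_request:.8f} per request")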


The Relationship Between Enterprise Tech Debt And Systemic Risk

We are currently witnessing a significant shift as boardrooms are being forced to address systemic risk. The recent changes announced by the SEC regarding cybersecurity expertise on boards are part of that shift, and boards and executive teams are being tasked with directly addressing systemic risk within their organizations. Systemic risk is one of the biggest challenges facing most organizations today, and tech debt is one of the primary drivers of systemic risk. And most executive teams don’t pay attention to either one. Yet. ... This is the significant challenge that tech debt brings to an organization. It hides under the cover of “working systems” in the background. The byproduct of tech debt is systemic risk. These aging platforms carry with them the risk of failing due to aging infrastructure and unreliable hardware and, even more importantly, the risk of being unable to support new workflows due to poor data structures and limited connectivity options for new data pipelines. So the systemic risk builds quietly, behind the scenes, while businesses function seemingly smoothly. 



Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera

Daily Tech Digest - October 27, 2023

Quishing is the new phishing: What you need to know

Consider the QR code aired during the Super Bowl. Now, imagine the company behind that commercial had malicious intent (just to be clear, the company behind that commercial did not have malicious intent). Say, for example, the QR code displayed during the ad opened your phone's browser and automatically downloaded and installed a piece of ransomware. Given the number of people who watch the Super Bowl, the outcome of that attack could have been disastrous. That's quishing. ... We've all just accepted the QR code. And, to that end, we trust them. After all, how harmful can a simple QR code be? The answer to that question is…very. And cybercriminals are counting on the idea that most consumers always assume QR codes are harmless. Those same criminals also understand that their easiest targets are those on mobile phones. Why? Because most desktop operating systems include phishing protection. Phones, on the other hand, are far more vulnerable to those attacks. At the moment, most quishing attacks involve criminals sending a QR code via email. 
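
Once a QR code's payload is decoded, a few defensive heuristics can be applied to the URL before it is ever opened. The checks below are illustrative examples only, not a complete defense against quishing.

    # Simple heuristics for vetting a URL decoded from a QR code
    # (illustrative only; not a substitute for real phishing protection).
    from urllib.parse import urlparse

    SUSPICIOUS_TLDS = {".zip", ".mov", ".xyz"}  # example list, not exhaustive

    def risk_flags(url: str) -> list[str]:
        flags = []
        parsed = urlparse(url)
        if parsed.scheme != "https":
            flags.append("not HTTPS")
        host = parsed.hostname or ""
        if host.replace(".", "").isdigit():
            flags.append("raw IP address instead of a domain")
        if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
            flags.append("unusual top-level domain")
        if "@" in parsed.netloc:
            flags.append("credentials embedded in the URL")
        return flags

    print(risk_flags("http://192.168.4.2/update.apk"))
    # ['not HTTPS', 'raw IP address instead of a domain']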


Boardrooms losing control in generative AI takeover

The theme of people adopting GenAI within their workplaces without oversight from IT and security teams or leadership, a trend we might reasonably term shadow AI, is not a new one as such. Earlier this year, an Imperva report raised similar concerns, stating that an insider breach at a large organisation arising from someone using generative AI in an off-the-books capacity was only a matter of time. However, given the steadily widening scope and ever-growing capability of generative AI tools, organisations can no longer afford not to exert, at the very least, minimal oversight. “Much like bring-your-own-device [BYOD], gen AI offers massive productivity benefits to businesses, but while our findings reveal that boardroom executives are clearly acknowledging its presence in their organisations, the extent of its use and purpose are shrouded in mystery,” said Kaspersky principal security researcher, David Emm. “Given that GenAI’s rapid evolution is currently showing no signs of abating, the longer these applications operate unchecked, the harder they will become to control and secure across major business functions such as HR, finance, marketing or even IT,” said Emm.


Privacy vs convenience – which comes out ahead?

There is a mutual responsibility between employees and employers, so trust and openness are essential. On the one hand, employees must be discerning about the digital tools they employ, understanding the permissions they grant and the third parties that might gain access to their data. They also need to accept that their personal choices can impact the security of the organization too. This requires awareness and a commitment to regular audits of personal digital spaces, ensuring that no unwanted entities are lurking in the shadows. Conversely, organizations bear the responsibility of being forthright about their data practices. Companies that are transparent about the data they access - and, more importantly, why they access it—stand out as beacons of integrity. This transparency extends beyond mere access; it encompasses the entire data lifecycle, from collection to storage, usage, and eventual disposal. By openly communicating these practices, enterprises can foster a culture of trust with their employees – and comply with regulatory standards too.


How to Speed Cyberattack Discovery

A fast and reliable way to identify cyber threats is with proactive threat hunting, which utilizes human defenders armed with advanced detection and proactive response technologies and approaches, says Mike Morris, a Deloitte risk and financial advisory managing director, via an email interview. “In particular, threat hunting, during which human defenders actively maneuver through their networks and systems to identify indicators of a network attack and preemptively counter these threats, can speed the discovery of cyberattacks.” Yet he warns that for threat hunting to function optimally, it’s necessary that specific, relevant, and accurate intelligence is coupled with automation to identify and mitigate the adversary’s activities. When deploying human-based threat-hunting capabilities, it’s helpful to think about the parallels to physical security leading practices, Morris says. “For example, human security guards, tasked with protecting critical assets, constantly inspect physical infrastructures and maintain the integrity of their responsible spaces by actively patrolling and investigating,” he explains.


The Financial Consequences of Inadequate Data Governance

Low-quality data can severely impact decision-making and operational efficiency. Inaccurate or incomplete data can lead to flawed strategies, missed opportunities, and ultimately, financial losses. A sales team relying on outdated customer information could waste time on leads that have already been converted or are no longer relevant, leading to lost sales opportunities and financial losses. Similarly, a marketing team using incorrect customer segmentation data could end up targeting the wrong audience, wasting advertising budget, and missing revenue targets. These real-world scenarios further illustrate the cost of poor data quality. The examples highlight the tangible impact of data quality issues on an organization's bottom line. ... Data breaches can have devastating financial consequences for organizations. The direct costs include legal fees, fines, and customer compensation, which can run into millions of dollars. Indirect costs, such as reputational damage and loss of business, can be even more damaging in the long run. 


Change Management for Zero Trust

Change management isn’t any different for Zero Trust than it is for any other big initiative. But most of us aren’t very good at change management. And security and cybersecurity are not sexy. And most people want their security to be minimally invasive and as unnoticeable as possible. And most leaders get no top-line/bottom-line joy from spending money on Zero Trust initiatives. And Zero Trust doesn’t drop new features and functionality for a product at the end of a sprint. ... Three key areas you can focus on as you get started: Get leadership engaged – If the culture of your organization is driven by urgency, craft a message and plan that leverages urgency. If the culture is driven through aspiration, use aspirational vision and goals. Either way, get leadership on board to deliver the message. Create a communications strategy – The strategy must include the rhythm and mode of communications, as well as the context and content of the communications for leadership and sponsors, leads and key centralized players, local mavens, and users. Persuasive communication is what the marketing team does well. Get them involved.


UK Prime Minister announces world’s first AI Safety Institute

"The British people should have peace of mind that we're developing the most advanced protections for AI of any country in the world," Sunak said. "I will always be honest with you about the risks, and you can trust me to make the right long-term decisions." The AI Safety Institute will assess and study these risks — from social harms like bias and misinformation, through to the most extreme risks of all - so that the UK understands what each new AI model is capable of, Sunak added. "Right now, we don't have a shared understanding of the risks that we face. Without that, we cannot hope to work together to address them." The UK will therefore push hard to agree to the first ever international statement about the nature of AI risks to ensure that, as they evolve, so does shared understanding about them, Sunak said. "I will propose that we establish a truly global expert panel to publish a State of AI Science report. Of course, our efforts also depend on collaboration with the AI companies themselves. Uniquely in the world, those companies have already trusted the UK with privileged access to their models. That's why the UK is so well-placed to create the world's first Safety Institute."


DNS Under Siege: Real-World DNS Flood Attacks

During H1 2023 there was a surge in DNS flood attacks affecting multiple organizations around the world. Looking at the DNS attacks and their characteristics we suspect that they belong to one or more global DNS attack campaigns. We were able to identify significant DNS attack activity world-wide and were able to identify correlations between attack events to different DNS servers. ... The organizations that were targeted by these DNS flood attacks all belong to the financial and commercial segment. We detected and mitigated DNS flood attacks targeting banks, retailers, insurers, etc. This can hint at the DNS campaign’s agenda, and help similar organizations protect their DNS services. ... There was a single attack vector repeating in all DNS attacks and that is the random subdomain, also known as DNS water torture. Interestingly, we identified different types of invalid subdomains used in the attacks. In some cases, it was purely random with high entropy, including case randomization, and in other cases the attack subdomains were human-readable, but non-existent.
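
One common way to spot the purely random variant of the attack is to measure the Shannon entropy of the queried label: machine-generated subdomains score much higher than human-readable ones. A sketch, with an invented length cutoff and threshold (it would not catch the human-readable variant).

    # Flag possible "DNS water torture" queries via the Shannon entropy
    # of the subdomain label (cutoffs are invented for illustration).
    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        n = len(s)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def suspicious(qname: str, zone: str, threshold: float = 3.5) -> bool:
        label = qname.removesuffix("." + zone)
        # A long, high-entropy label suggests a randomly generated subdomain.
        return len(label) >= 8 and shannon_entropy(label) > threshold

    for q in ["www.example.com", "xk3q9zt7fw2mnp8r.example.com"]:
        print(q, "->", suspicious(q, "example.com"))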


FTC eyes competitive practices for genAI

Generative AI is set to become one of the world’s most dominant industries. One projection puts the market at $76.8 billion by 2030, up from a current valuation of $11.3 billion. Goldman Sachs, for its part, boldly says the technology could drive a 7% (or nearly $7 trillion) increase in global GDP. Amidst all this, the FTC says issues could arise around control over one or more of the “key building blocks” of generative AI: data, talent and computational resources. If a single company or handful of firms controlled one of these essential inputs, “they may be able to leverage their control to dampen or distort competition,” the agency asserts. “And if generative AI itself becomes an increasingly critical tool, then those who control its essential inputs could wield outsized influence over a significant swath of economic activity.” In particular, the agency said firms could bundle and tie products — offering multiple products in a single package or conditioning the sale of one product on the purchase of another, respectively. 


Keys to effective cybersecurity threat monitoring

Over the past few years, attackers have adjusted their tactics, finding success in targeting employees with the intent of stealing their credentials. Social engineering tactics such as phishing often catch individual users out, leading to passwords, financial information and other sensitive data being breached. “In the past, they might have relied on attacking infrastructure directly through vulnerabilities or brute force attacks. While they can still happen, these attacks run a high risk of discovery before the bad actor can get in,” explained Hank Schless, director of global security campaigns at Lookout and host of the Security Soapbox Podcast. “Now, attackers are targeting individuals who likely have access to large sets of valuable cloud data. They do this with the intention of stealing those individuals’ credentials via mobile phishing attacks in order to be able to enter the organisation’s infrastructure discreetly under the guise of being a legitimate user. “This creates massive issues with monitoring for threats, because the threat looks like it’s coming from the inside if an attacker is using stolen credentials.”
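
One heuristic monitoring teams use against stolen credentials is "impossible travel": two logins for the same account from places too far apart to reach in the elapsed time. A sketch with invented coordinates and a rough airliner-speed ceiling.

    # "Impossible travel" check for spotting use of stolen credentials
    # (all values invented for illustration).
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points on Earth, in km.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def impossible_travel(login_a, login_b, max_kmh=900):
        # Each login is (timestamp in hours, latitude, longitude).
        t1, lat1, lon1 = login_a
        t2, lat2, lon2 = login_b
        hours = abs(t2 - t1) or 0.01  # avoid dividing by zero
        implied_speed = haversine_km(lat1, lon1, lat2, lon2) / hours
        return implied_speed > max_kmh

    # A login in London, then one an hour later in New York: flag it.
    print(impossible_travel((0, 51.5, -0.1), (1, 40.7, -74.0)))  # True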



Quote for the day:

"It is the capacity to develop and improve their skills that distinguishes leaders from followers." -- Warren G. Bennis

Daily Tech Digest - October 26, 2023

CTOs Look to Regain Control of the IT Roadmap

Putting an emphasis on modular architecture and open standards can ensure easier integration or disengagement from specific solutions, thereby mitigating these concerns. ... instead of an expensive and time-consuming “rip and replace” model, organizations are extending the life and value of their existing ERP investments and shifting their newly freed up resources to drive innovation “around the edges” of their current robust ERP core. “This approach applies to all industries and sizes, enabling organizations to minimize churn and focus on customer value, competitive advantage and growth,” he says. The survey also indicated IT leaders are exploring alternatives to subscription-based licensing models, focusing on optimizing operational costs and aligning investments with business strategies for growth and innovation. “Applications that enable competitive advantage and differentiate a company are a high priority for organizations, while for example, ERP administration functions like HR and finance offer very little differentiation and are frequently retained as a foundational core, optimized for cost and efficiency,” Rowe explains.


Measure Developer Joy, Not Productivity, Says Atlassian Lead

So, when senior leadership is under pressure to show the outcome of one of their most sizable operating expenses, what’s a tech company to do? First, Boyagi suggested, change your questions. Instead of “How do I increase developer productivity?” or “How can I measure developer productivity?” try “How can I make developers happier?” and “How can I help developers be more productive?” The questions can help steer the conversation in a more useful direction: “I think every company has to go on a journey and do what’s right for them in terms of productivity. But I don’t think measurement is the thing we should be talking about.” First, because productivity for knowledge workers has always been one of the hardest things to measure. And, he added, because we need to take inspiration from other companies, not replicate what they do. Boyagi doesn’t suggest you try to do what Atlassian does. But feel free to take inspiration from and leverage its DevEx strategy, as well as those from the likes of other high-performing organizations like Google, Netflix, LinkedIn and Spotify.


How much cybersecurity expertise does a board need?

For companies that have not yet built up cybersecurity expertise among their directors and reporting committees, there’s work to do, says Lam, who explains there are a number of ways to build up that "cyber-IQ". “One is you should get the right board talent in terms of risk and cyber expertise that’s appropriate to their risk profiles,” says Lam, who explains that companies leery of using up a hotly contested director seat for a cyber specialist simply need to broaden their recruitment parameters. ... As organizations slowly morph their board composition, they also need to be careful to not get into a situation where one director is solely responsible for cybersecurity oversight and no one else minds that area of risk, warns Chenxi Wang ... “There’s been an explosive offering of cyber governance training in recent years. While that is a great step in the right direction, a lot of them vary as far as the quality of content goes,” Shurtleff tells CSO. “You can’t substitute somebody’s cyber experience and knowledge from a lifetime of professional experience into a two-week course. ...”


What is a business intelligence analyst? A key role for data-driven decisions

The role is becoming increasingly important as organizations move to capitalize on the volumes of data they collect through business intelligence strategies. BI analysts typically discover areas of revenue loss and identify where improvements can be made to save the company money or increase profits. This is done by mining complex data using BI software and tools, comparing data to competitors and industry trends, and creating visualizations that communicate findings to others in the organization. ... It’s a role that combines hard skills such as programming, data modeling, and statistics with soft skills such as communication, analytical thinking, and problem-solving. Candidates need a well-rounded background to balance the line between IT and the business, and usually a bachelor’s degree in computer science, business, mathematics, economics, statistics, management, accounting, or a related field. If you have a degree in an unrelated field but have completed courses in these subjects, that can suffice for an entry-level role in some organizations. Other senior positions may require an MBA, but there are plenty of BI jobs that require only an undergraduate degree.


Infrastructure teams need multi-cloud networking and security guardrails

The key is to ensure that the technology implemented is actually providing a guardrail and not imposing a speedbump or roadblock. Network and security teams need to provide infrastructure and services that are programmatic and easy to use. For instance, DevOps should be able to request IP addresses, spin up secure DNS services, request changes to firewall policies, or adjust transit routing with a couple clicks. If approvals are required from network and security teams, those approvals should be automated as much as possible. This drive toward programmatic services is apparent in my research at Enterprise Management Associates (EMA). For instance, I recently surveyed 351 IT professionals about their multi-cloud networking strategies for the report “Multi-Cloud Networking: Connecting and Securing the Future.” (Check out EMA’s free webinar to learn more about what we found in that research). In that report, 82% of respondents told us that it was at least somewhat important for their multi-cloud networking solutions to have open APIs.


Demystifying the top five OT security myths

“A common belief is that the OT protocols are proprietary, and the attacker doesn’t have access to OT devices or specific proprietary protocols,” he said. “To some extent, the proprietary nature of the OT device does pose a challenge to hacking, but threat actors behind targeted attacks are usually knowledgeable, persistent and resourceful.” Goh said such threat actors, particularly those backed by nation-states, have the resources to replicate an OT system, and create and rigorously test their malware in a lab before launching an attack. “This possibility is highly speculated in the Triton malware attack, which happened in 2017 in a malicious attempt to destroy and damage a petrochemical plant in Saudi Arabia by targeting the safety system,” he added. ... In the concept of defence-in-depth, firewalls are used to separate the different layers of an OT network. Goh said while it is mandatory to use firewalls to protect an OT network from unauthorised access, this protection is only as good as the policy and the security of the firewall. “We all know that misconfigurations of firewall rules happen and are not uncommon,” he said, citing a study that found one in five firewalls have one or two configuration issues.


JPMorgan Chase CISO explains why he's an 'AI optimist'

We've started to look at it. That's the short answer. The longer answer is, I was a bit of an AI pessimist before November of last year. Seeing ChatGPT in action for the first time and what it could do opened my mind -- perhaps many others' as well. It felt like we tipped over the precipice of an AI era. I'm an optimist about its capabilities. Most of the last nine or 10 months or so have been us trying to enable AI to use inside of the firm. We have been users of traditional AI for some time. Generative AI is newer for us in the business. We've spent the last six or seven months designing the right isolated mechanisms that are safe for us to use to produce our data. That's something we'll start doing internally as a business more broadly and think through how we use it as a cybersecurity use case. It's probably not going to be done in a generic sense in the short-term. Cybersecurity practitioners and maybe some industry consortiums need to get together to build and train the right models to support cybersecurity. It's clear to me that one, everybody's thinking about how they use AI in their tech. 


CISOs struggling to understand value of security controls data

Understanding where security controls are failing is a critical first step to mitigating cyber risk and making the right decisions. Unfortunately, only 36% of security leaders are totally confident in their security data and use it for all strategic decision making. This is a concerning finding, as without trusted data CISOs might struggle to influence senior business stakeholders and ensure the right people are held accountable for fixing security issues. ... The benefits of improving data quality and trust are clear, with 84% of security leaders believing that increasing trust in their data would help them secure more resources to protect their organization. But first there needs to be a mindset change in security leaders and the board—away from using controls data for reporting, and instead embracing it to proactively drive business decisions and stop problems before they occur. “The industry needs to change if we are to solve the CISO security controls conundrum, and Continuous Controls Monitoring (CCM) can be the catalyst. It isn’t a better reporting tool, it’s a way of knowing what to do next – making day-to-day cybersecurity firefighting easier and getting ahead of the game on strategic risk,” argues Panaseer Security Evangelist, Marie Wilcox.


How to Become a Data Governance Specialist

Generally, a DG specialist will have a bachelor’s degree in a field related to computers (information technology, computer science) and one to four years of experience. However, a combination of computer and communication skills is needed for this position. Lots of technical experience can stand in for a bachelor’s degree, but the lack of a degree will limit chances for advancements and promotions. Some employment advertisements will require a Data Governance and Stewardship certification. The certification process typically requires a degree, attending a workshop, a test, and a fair amount of experience. Certification can be difficult to get, in part because there are very few organizations offering it. This requirement may be an unrealistic expectation on the part of the employer, particularly for non-management positions. ... Much of Data Governance is actually about changing habitual behavior. When changes are made, it is common for a team to be assembled to execute the project. A Data Governance program must be presented as a practice and not a project. Projects have start and end dates. 


Has Your Architectural Decision Record Lost Its Purpose?

Sometimes the expected longevity of a decision causes a team to believe that a decision is architectural. Most decisions become long-term decisions because the funding model for most systems only considers the initial cost of development, not the long-term evolution of the system. When this is the case, every decision becomes a long-term decision. This does not make these decisions architectural, however; they need to have high cost and complexity to undo/redo in order for them to be architecturally significant. To illustrate, a decision to select a database management system is usually regarded as architectural because many systems will use it for their lifetime, but if this decision is easily reversed without having to change code throughout the system, it’s generally not architecturally significant. Modern RDBMS technology is quite stable and relatively interchangeable between vendor products, so replacing a commercial product with an open-source product, and vice versa, is relatively easy so long as the interfaces with the database have been isolated.
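
The reversibility point can be made concrete in code: when database access is isolated behind an interface, swapping the engine means replacing one adapter rather than touching code throughout the system. A minimal sketch with hypothetical names.

    # Isolating database access behind an interface keeps the DBMS choice
    # easy to reverse (all names are hypothetical).
    from typing import Protocol

    class UserRepository(Protocol):
        def get_user(self, user_id: int) -> dict: ...

    class PostgresUserRepository:
        def get_user(self, user_id: int) -> dict:
            # Would issue SQL through a Postgres driver here.
            return {"id": user_id, "backend": "postgres"}

    class SqliteUserRepository:
        def get_user(self, user_id: int) -> dict:
            # Same contract, different engine; callers never notice the swap.
            return {"id": user_id, "backend": "sqlite"}

    def handle_request(repo: UserRepository, user_id: int) -> dict:
        # Application code depends only on the interface, not the vendor.
        return repo.get_user(user_id)

    print(handle_request(PostgresUserRepository(), 42))
    print(handle_request(SqliteUserRepository(), 42))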



Quote for the day:

"The task of leadership is not to put greatness into humanity but to elicit it, for the greatness is already there." -- John Buchan

Daily Tech Digest - October 24, 2023

7 mistakes to avoid when developing RPAs

“The biggest mistake when using RPA is to fall into the trap of thinking it can automate processes, and in reality, RPA is more accurately robotic task automation (RTA),” says Aali Qureshi, SVP of Sales for the Americas at Kissflow. “RPA bots are great for automating individual, repetitive vertical tasks, but if you want to create and automate more complex horizontal processes that span an entire enterprise, you need a low-code or no-code automation tool that allows you to automate tasks and processes in order to skip hand-coding.” ... It’s not only exceptions that can be problematic, especially when deploying bots to support critical business processes. The next mistake to avoid is deploying bots to production without data validation, error detection, monitoring, and alerting. “RPA is relatively easy as long as one can assume it works correctly, or if it doesn’t, no damage will be done. But malfunctioning RPA can make a huge number of errors in a very short time,” says Hannula. One best practice is centralizing bot monitoring and alerting with the devops or IT ops teams responsible for monitoring applications and infrastructure.
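
A sketch of what those guardrails can look like around a single bot task: validate input before acting, log every error, and halt for human review once a failure threshold is hit, so a malfunctioning bot cannot repeat mistakes at machine speed. The record fields and threshold are invented.

    # Guarding a bot task with validation, error detection and alerting
    # (record fields and thresholds are invented for illustration).
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("rpa-bot")

    def validate(record: dict) -> bool:
        # Reject obviously bad input before the bot acts on it.
        return bool(record.get("invoice_id")) and record.get("amount", 0) > 0

    def process(record: dict) -> None:
        if not validate(record):
            raise ValueError(f"invalid record: {record}")
        log.info("processed invoice %s", record["invoice_id"])

    def run_bot(records, max_failures=3):
        failures = 0
        for record in records:
            try:
                process(record)
            except Exception as exc:
                failures += 1
                log.error("bot error: %s", exc)
                if failures >= max_failures:
                    # Hand off to humans instead of failing at machine speed.
                    log.critical("failure threshold hit; halting bot")
                    return

    run_bot([{"invoice_id": "A1", "amount": 10}, {"amount": -5}, {}, {}])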


How to ask the board and C-suite for security funding

Risk acceptance is the board's prerogative. So, Budiharto advises CISOs to calculate and communicate the cost of not implementing the solution, including the likelihood of a breach or exposure, and the full financial impact of such a breach or exposure (from direct losses to cleanup costs) should the funding request be denied. "To the CFO, those savings should far outweigh the TCO of implementing and managing the solution," she adds. Putting it all together, she describes a scenario where a new solution needs to be added to the existing EDR to stop ransomware in its tracks, kill it, and remediate it faster and more thoroughly than their existing EDR does. "The board will ask, 'How is that related to the bottom line?' So, I calculate the loss of revenue in productivity and loss of business and multiply that by the average days of trying to resolve a ransomware attack under the current EDR system," Budiharto explains. "These types of comparisons will help the board see the big picture, including how your solution will help avoid that big expense."


Gartner: CIOs must prepare for generative AI disruption

Beyond business leaders, Gartner noted that governments also have put in place a strong commitment to AI and are prioritising strategies and plans that recognise AI as a key technology in both private and public sectors. This includes incorporating AI into long-term national planning, which is being reinforced through the implementation of corresponding acts and regulations to bolster AI initiatives. “Implementation at a national level will solidify AI as a catalyst for enhancing productivity to boost the digital economy,” said Plummer. “Successful implementation of large-scale AI initiatives necessitates the support and collaboration of diverse stakeholders, showcasing the mobilisation and convening ability of national resources.” Among the key application areas for CIOs and IT leaders is the ability for generative AI to help IT departments manage older systems. According to Gartner, generative AI tools will be used to explain legacy business applications and create appropriate replacements, reducing modernisation costs by 70%, by 2027.


CIOs assess generative AI's risk and reward for software engineers

While most CIOs are choosing to keep generative AI tools away from production environments, it might not be long before IT professionals start using generative AI for disparate elements of the software development and engineering process. "The main message I have is to get your staff up to date and put the resources into training, and then take advantage of it," she says. "It's incredible what you can do with code generation now. I could build an entire application without knowing any JavaScript or how to code. But you must be educated on all the pluses and the minuses -- and that doesn't happen overnight." That's a sentiment that resonates with Omer Grossman, global CIO at CyberArk. In an interview with ZDNET, he suggests now is the time to start exploring generative AI. "Leaders should make decisions," he says. "And I'm emphasizing that point because if you don't make any decisions because you are risk-averse, you risk missing out." For business leaders who are thinking about how to use generative AI in areas such as software development and engineering, Grossman suggests a range of steps.


Closing ‘AI confidence gap’ key to powering its benefits for society and planet

The research by BSI, the UK-headquartered business improvement and standards company, was commissioned to launch the Shaping Society 5.0 essay collection, which explores how AI innovations can be an enabler that accelerates progress. It highlights the importance of building greater trust in the technology, as many expect AI to be commonplace by 2030, for example, automated lighting at home (41%), automated vehicles (45%) or biometric identification for travel (40%). A little over a quarter (26%) expect AI to be regularly used in schools within just seven years. Interestingly, three-fifths of the respondents globally (61%) want international guidelines to enable the safe use of AI, indicating the importance of guardrails to ensure AI’s safe and ethical use and build trust. For example, safeguards on the ethical use of patient data in healthcare are important to 55% of the respondents of the survey globally. Engagement with AI is markedly higher in two of the fastest-growing economies: China (70%) and India (64%) already use AI every day at work.


Exponential Thinking: The Secret Sauce Of Digital Transformation

The first crucial step in embracing exponential thinking is to reframe your relationship with fear and failure. We often view challenges or setbacks as threats, paralyzing us into inaction. Instead, reframe your fears as opportunities for learning and growth. When faced with a challenge, ask yourself questions like, "What can I learn from this?" or "How can this experience help me grow?" This shift in perspective will make you more resilient and open to new experiences, which is the core foundation for exponential thinking. ... Exponential thinking, which leads to exponential growth, rarely happens in isolation; it's a team effort. Make it a point to regularly interact with people outside your immediate team and field of expertise; connect with folks from different departments and even different fields. Whether it's through inter-departmental meetings, cross-functional projects or internal hackathons, the fusion of different perspectives can ignite innovative solutions with exponential potential. In a world aiming for exponential success, an organizational culture that champions team collaboration across all departments is not just beneficial—it's imperative.


Hackers Hit Secure File Transfer Software Again and Again

Vulnerabilities continue to surface in file transfer tools. In May, Australian cybersecurity firm Assetnote alerted Citrix to a critical vulnerability in the ShareFile storage zones controller, or SZC, in its cloud-based secure file-sharing and storage service known as Citrix Content Collaboration. Citrix patched the flaw on May 11, notified customers directly about the vulnerability and helped them lock it down. Citrix also blocked unpatched hosts from connecting to its cloud component, thus limiting any hacking impact to a customer's own environment. The U.S. Cybersecurity and Infrastructure Security Agency warned in August that the Citrix ShareFile vulnerability was being actively exploited by attackers. ... Security experts have warned users of secure file transfer software to safeguard themselves, given the risk of more such attacks perpetrated by Clop or copycats. One challenge with Clop's attacks is that the group has somehow continued to obtain access to zero-day vulnerabilities in the products, meaning even fully patched software could be - and was - exploited.


How Do We Manage AI Hallucinations?

The analogy between fictitious responses produced by a machine and sensory phenomena in humans is clear: Both produce information that is not grounded in reality. Just as humans experiencing hallucinations may see vivid, realistic images or hear sounds reminiscent of real auditory phenomena, LLMs may produce information in their “minds” that appear real but is not. ... While the ultimate causes of AI hallucinations remain somewhat unclear, a number of potential explanations have emerged. These phenomena are often related to inadequate data provision during design and testing. If a limited amount of data is fed into the model at the outset, it will rely on that data to generate future output, even if the query is reliant on an understanding of a different type of data. This is known as overfitting, where the model is highly tuned to a certain type of data but incapable of adapting to new types of data. The generalizations learned by the model may be highly effective for the original data sets but not applicable to unrelated data sets.
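
Overfitting is easy to demonstrate numerically: a model flexible enough to match its training points exactly will typically do worse on unseen points than a simpler one. A small sketch with invented toy data (exact numbers depend on the noise draw).

    # Tiny overfitting demonstration: a high-degree polynomial threads
    # every training point yet typically generalizes worse.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 8)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 8)

    x_new = np.linspace(0, 1, 100)
    y_true = np.sin(2 * np.pi * x_new)

    for degree in (3, 7):
        coefs = np.polyfit(x_train, y_train, degree)
        mse = np.mean((np.polyval(coefs, x_new) - y_true) ** 2)
        print(f"degree {degree}: mean squared error on unseen points = {mse:.3f}")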


When your cloud project is over budget

This is likely your fault: You did not plan well and missed many things that became unexpected costs or delays. There are also known budget issues around migrating or developing new systems and how much they cost to operate after being deployed. We’re talking about both. Not everyone is an excellent planner, but there is a discipline to project management, including metrics and estimation approaches, that most IT projects choose to ignore. These provide a rough estimate of how long and how much money should be needed to do something meaningful in the cloud. Ignoring these guidelines is never good, so let’s learn from our mistakes and improve project planning. ... Engage in proactive communication with your cloud service providers to discuss your situation and explore any potential options for cost reduction. Yes, this means begging for a discount. Providers may offer flexible pricing plans, reserved instances, or cost optimization guidance, since it’s their system. This may also mean agreeing to future commitments for cloud service usage that fall outside this budget period. That could be an ethical or policy no-no at your company, so check with the CFO.


Bracing for AI-enabled ransomware and cyber extortion attacks

In addition to state-sponsored attacks by APTs, governments must deal with their fair share of criminal activity as well, particularly at lower levels of government where cybersecurity resources are especially scarce. This includes attacks against police departments, public schools, healthcare systems, and others. These attacks ramped up in 2023, a trend we expect to continue as cybercriminals look to easy targets from which to steal sensitive data like PII. Ransomware groups’ success is often less about technological sophistication and more about their ability to exploit the human element in cyber defenses. Unfortunately, this is exactly the area where we can expect AI to be of the greatest use to criminal gangs. Chatbots will continue to remove language barriers to crafting convincing social engineering attacks, learn to communicate believably, and even lie to get what they want. As developers release ethically dubious and amoral large language models in the name of free speech and other justifications, these models will also be used to craft novel threats.



Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison

Daily Tech Digest - October 22, 2023

The AI Evolution Will Happen Faster Than Computing Evolution

Compute will still evolve, but how fast is the question. The Internet led to the massively distributed data center approach we know as the cloud, a terrible term, but I digress. But today the power of computing can only increase so much. Moore’s Law looks increasingly impossible to keep pace with as we develop transistors the size of an atom. Infrastructure limitations are causing all sorts of headaches for software vendors who now face a litany of options for making AI systems more efficient with precious compute resources. ... It’s all about the data and its compounding growth. Having transactional data ready for analytics, with speed and efficiency, is what makes it possible to scale AI systems. As we’ve seen, AI systems must be fast, and SingleStore markets that capability with its in-memory and disk capabilities. There’s also the flexibility that customers demand: a hybrid approach that cuts across cloud services and on-premises deployments. With SingleStore’s vector indexing and JSON handling, the capabilities open up further.
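As a rough illustration of what vector search over JSON records means, here is a minimal, database-agnostic Python sketch (the records, embeddings, and cosine ranking are illustrative assumptions, not SingleStore's actual SQL interface): each JSON document carries an embedding vector, and a query is answered by ranking documents by vector similarity.

```python
# Database-agnostic sketch of vector search over JSON records: each record
# carries an embedding, and we rank records by cosine similarity to a query
# vector. Real engines build an index over these vectors instead of scanning.
import json
import numpy as np

records = [
    {"id": 1, "text": "in-memory analytics engine", "embedding": [0.1, 0.9, 0.2]},
    {"id": 2, "text": "disk-based object storage",  "embedding": [0.8, 0.1, 0.3]},
    {"id": 3, "text": "hybrid cloud data platform", "embedding": [0.3, 0.7, 0.5]},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [0.2, 0.8, 0.1]  # embedding of the user's search phrase (assumed)
ranked = sorted(records, key=lambda r: cosine(query, r["embedding"]), reverse=True)
print(json.dumps(ranked[0], indent=2))  # best match
```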


Preparing for the Shift to Platform Engineering

To effectively support the transition, leaders must commit to a culture of platform engineering. Simply adopting technology isn’t enough. It needs to be backed by a thorough strategy that allows developers to truly benefit from the tools and structures of platform engineering. What does this look like? Success requires leaders and developers to encourage collaboration and break down silos between operations and development teams. It’s possible to build a bridge between developers and operations by committing to cloud migration, creating a centralized platform and investing in collaborative tools and the strategy to back them up. Engaging in platform engineering requires dedication to a collaborative culture, instilled from the top and empowered by overall strategic decisions and operations. This includes continued learning for developers to stay on top of new languages, trends, challenges and priorities, internally and externally. Teams are more successful when they use performance metrics to track workflows, helping them conduct effective maintenance and improve on a consistent, ongoing basis.


Data Governance in action: the CDO, the CISO and the Perks of collaboration

Maintaining independent reporting structures for the CDO and CISO, separate from the Chief Information Officer (CIO), is crucial. That’s because when they report directly to the executive leadership or the CEO, they can provide independent updates on data governance and cybersecurity, ensuring clarity and objectivity in decision-making for critical data-related matters. Due to this arrangement, senior management will have a holistic view of risk management, compliance, and strategic decision-making, without any biases that may arise from reporting to the CIO. Biases, in this context, can manifest in several ways. For example, a CIO might prioritise IT initiatives that align with the department’s goals or budget constraints, potentially overlooking or downplaying certain data governance or security concerns. Hence, this hierarchical reporting structure, with the CIO in the middle, can unintentionally filter or influence the information that reaches senior management, which could impact their ability to make well-informed, impartial decisions.


North Korean hackers are targeting software developers and impersonating IT workers

Diamond Sleet was observed using two attack paths: the first consisted of deploying the ForestTiger backdoor, while the second deployed payloads for DLL search-order hijacking attacks. Onyx Sleet used a different attack path: After successfully exploiting the TeamCity vulnerability, the threat actor creates a user account (named krtbgt), runs system discovery commands and finally deploys a proxy tool named HazyLoad to establish a persistent connection. “In past operations, Diamond Sleet and other North Korean threat actors have successfully carried out software supply chain attacks by infiltrating build environments,” Microsoft noted. North Korean state-sponsored hackers have also been linked to a social engineering campaign targeting software developers through GitHub. By pretending to be a developer or a recruiter, the attacker managed to convince victims to collaborate on a GitHub repository and ultimately download and execute malware on their devices.


Five key questions about disaster recovery as a service

Almost any organisation can use DRaaS because it requires little in the way of hardware or up-front investment. However, its use is most common in organisations that want to minimise downtime but cannot justify investment in redundant hardware, either on-premises or in a datacentre or colocation facility. This is likely to involve a trade-off between performance and recovery times, and cost. DRaaS that runs in the public cloud will be slower than dedicated systems, but it will still be faster to recover from than basic cloud-based backup or BaaS. Another application for DRaaS is where conventional DR systems are less practical. This includes branch and remote offices that may have lower-bandwidth connections and little in the way of on-site IT support. There is also a trend towards using DRaaS to provide resilience for cloud-based infrastructure. Such cloud-to-cloud disaster recovery can range from replicating entire cloud production environments or specific VMs to a secondary cloud location, to providing additional redundancy and continuity for SaaS applications and even Microsoft 365.


Blue-Green Deployment: Achieving Seamless and Reliable Software Releases

Blue-green deployment is a software deployment strategy for reducing risk and downtime when releasing new versions or updates of an application. It entails running two parallel instances of the same production environment, with the “blue” environment representing the current stable version and the “green” environment hosting the new release. With this configuration, switching between the two environments can be done without disrupting end users. The fundamental idea behind blue-green deployment is to route user traffic to the blue environment, protecting the production system's stability and dependability while the green environment is set up and thoroughly tested; developers and QA teams can validate the new version before it is made available to end users. ... The advantages of blue-green deployment are numerous. By maintaining parallel environments, organizations can significantly reduce downtime during deployments, as the sketch below illustrates.
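Here is a minimal Python sketch of the traffic-switch idea (the URLs, the /healthz endpoint, and the single-process router shape are illustrative assumptions, not a production load balancer): traffic points at blue until green passes a health check, and the previous environment stays warm so rollback is just the same switch in reverse.

```python
# Minimal sketch of a blue-green cutover. A real setup would flip a load
# balancer or service selector; this toy keeps the same shape in one process.
import urllib.request

ENVIRONMENTS = {
    "blue": "http://blue.internal.example",    # current stable version (assumed URL)
    "green": "http://green.internal.example",  # candidate release (assumed URL)
}
active = "blue"  # all user traffic currently targets blue

def healthy(env: str) -> bool:
    """Probe the environment's health endpoint before cutting traffic over."""
    try:
        with urllib.request.urlopen(f"{ENVIRONMENTS[env]}/healthz", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def cut_over() -> str:
    """Switch traffic to the idle environment only if it passes health checks."""
    global active
    candidate = "green" if active == "blue" else "blue"
    if healthy(candidate):
        active = candidate  # instant switch; the old env stays warm for rollback
    return active

if __name__ == "__main__":
    print("serving from:", cut_over())
```

Because the inactive environment is never torn down during the release, recovering from a bad deploy is the same one-line switch back, which is what makes blue-green rollbacks near-instant.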


Shaping the Future of Hybrid Quantum Algorithms for Drug Discovery

One of the main challenges of drug discovery is simulating the interaction between molecules to, for instance, predict the potency of a drug. Accurately simulating the behavior of even a single molecule is tricky, since the number of possible interactions skyrockets as you increase the overall number of molecules. Computer-aided drug discovery has been around for about 40 years. However, due to limited computational power, the first software packages had to simplify the physics and depended heavily on experimental validation, which is, to this day, a lot of trial and error. As the computational power of computers increases, and as physics models become more and more complex, we’ll be able to run more accurate simulations that not only spare us a lot of experimental testing but also allow us to develop entirely new drugs. Because of their simplifications, earlier models have left a vast chunk of the chemical search space untapped. Quantum computing is still very early, and quantum computers have yet to demonstrate a practical advantage over supercomputers.


A technology lawyer suggests how artificial intelligence can benefit every Indian tangibly

As impressive as AI has been so far, we are, at the time of this writing, on the brink of yet another transformation that promises to be even more dramatic. Over the past year or so, remarkable improvements in the capabilities of large language models (LLMs) have hinted at a new form of emergent ‘intelligence’ that can be deployed across a range of applications whose full scale and scope will only become evident over time. So powerful is the potential of this new technology that some of the brightest minds on the planet have called for a pause in its development out of the fear that it will lead to a SkyNet future and the genuine threat of unleashing malicious artificial general intelligence. LLMs are computer algorithms designed to generate coherent and intelligent responses to queries in a humanlike conversational manner. They are built on artificial neural networks that have typically been trained on massive data sets that allow them to learn language structure. LLMs can learn without being explicitly programmed. 


Team Topologies: A Game Changer For Your Data Governance Organization

Managing data is not only a technological task, but also an organizational one. It requires successful coordination and collaboration between different teams and stakeholders. Here, priorities, goals, and perspectives often differ, making it difficult to establish effective work processes and communication structures. Another key aspect is the clear definition of roles – such as the role of a data architect or the role of a master data manager – and their responsibilities in the context of the data organization. Without clear structures, misunderstandings and conflicts can arise, negatively impacting data management efficiency and business processes. Given these challenges, implementing effective data management and data governance practices sometimes seems daunting. However, it is a critical factor in the success of data-driven organizations, and strategies exist to overcome these challenges. One promising strategy is to apply innovative collaboration models and team structures.


Soft Skills Play Significant Role in Success of IT Professionals

A person with strong problem-solving skills typically demonstrates the ability to analyze complex issues systematically, break them down, and identify effective solutions, according to Haggarty. "They showcase critical thinking, resourcefulness, and a willingness to explore alternative approaches," she noted. "Effective problem-solvers are also skilled in evaluating potential consequences and making informed decisions." In addition, their capacity to collaborate with diverse teams also contributes to successful problem-solving in dynamic work environments. In the tech industry, networking facilitates idea exchange and exposure to diverse perspectives. Haggarty said networking is highly ranked due to its potential to foster collaboration, knowledge sharing, and professional growth. "Establishing strong professional relationships can lead to opportunities for collaboration, career advancement, and staying informed about industry trends," she said. "It can also aid with problem-solving by connecting individuals with complementary skills to address multifaceted challenges."



Quote for the day:

"If my mind can conceive it, my heart can believe it, I know I can achieve it." -- Jesse Jackson