Daily Tech Digest - December 27, 2023

Artificial ethics: Programmed principles or cultivated conviction?

Are AI systems developing generalisable ethical principles? The evidence suggests limited ability to contextually apply concepts like privacy rights and informed consent. Or is ethical behavior just pattern recognition of scenarios labeled “unacceptable” in training data? There is a risk of overreliance on surface-level input/output mapping without philosophical grounding. Compare this rules-based approach to the human internalisation of ethical frameworks tied to justice, rights, duties, and harms. ... Their reasoning happens within limited data slices. This opacity around applied judgment represents a major trust gap. We cannot determine when AI should make independent decisions in ethically ambiguous areas versus defer to human oversight, given the understandable limitations in their moral literacy. Bridging this chasm requires architecting comprehensive ethical infrastructure across data sourcing, model design, and product applications. Ethics must permeate the entirety of systems, not follow as an afterthought. Careful scrutiny of the reasoning behind AI choices can uncover opportunities to instill principled priorities over transitory rules.


‘Merchants of Complexity’: Why 37Signals Abandoned the Cloud

With the ease of cloud computing comes a certain loss of independence. When a cloud provider suffers a massive outage, its customers are helpless to do anything for their own users. Hightower and DHH recalled a series of outages on the Google Cloud Platform so bad that it spurred 37Signals to move everything over to AWS. “The sense of desperation you feel when everything is out of your control, and there’s literally nothing we can do in the moment to fix it, is just so disheartening,” DHH said. And moving a workload, and its associated data, from one cloud to another is far from a trivial or inexpensive task. DHH noted that it cost 37signals “hundreds of thousands of dollars” to move 6 to 7 petabytes of data from GCP, due to egress costs. “This whole idea that the cloud is going to give you mobility was not really true,” DHH said. ... DHH related how you can see $600,000 of Dell servers sitting out there on a loading dock somewhere, whereas with the cloud, you are never sure where the money goes. You can click a button to spin up an authorization service, forget about it, and let it run up thousands of dollars in monthly charges on the corporate account.


If you don’t already have a generative AI security policy, there’s no time to lose

Over time, security teams have tried to rein in shadow IT with policies that mitigate the plethora of risks and challenges it has introduced, but many remain due to its scale. Figures from research firm Gartner revealed that 41% of employees acquired, modified, or created technology outside of IT's visibility in 2022, while a 2023 shadow IT and project management survey from Capterra found that 57% of small and midsized businesses have had high-impact shadow IT efforts occurring outside the purview of their IT departments. Although generative AI is quite a different thing, it's taking off far more quickly than shadow IT did. The lesson is that security-focused policies should be put in place in the early stages, as new technology use grows, and not after it reaches an unmanageable scale. Adding to the pressure are the potential security risks generative AI can introduce into businesses if unmanaged, which are still very much being understood. ... The problem is that most organizations, regardless of size or industry, are experiencing the same challenge around how to control and manage the secure use of generative AI, Thacker says.


The Silver Bullet Myth: Debunking One-Size-Fits-All Solutions in Data Governance

Customized Data Governance frameworks streamline Data Management processes, allowing them to better align with specific organizational workflows. This alignment drives an increase in the overall efficiency of operations and reduces redundancies, saving both time and resources. The result – fewer errors, smoother operations, and cost savings – is a complete win-win scenario. Effective Data Governance is also an instrumental factor in managing risks such as breaches and misuse. Customized frameworks provide organizations with enough space to put together robust mechanisms for identifying, assessing, mitigating, and ultimately dealing with risks in a way that is tailored to the specific risk landscape in question. Another thing the proponents of the silver bullet approach disregard is the need for solutions that protect rapidly moving data, as with same-day ACH transfers, messaging apps, and real-time video call apps such as Zoom and Google Meet. As organizations evolve, so, too, do their Data Governance needs. Customized frameworks are scalable and adaptable, accommodating changes as the organization grows, enters new markets, or adopts new technologies.


CIOs Battle Growing IT Costs with Tools, Leadership

CIOs can optimize IT spend by implementing more rigorous, strategy-aligned software approval processes aimed at avoiding duplicative spend and ensuring contracts are rightsized for business needs. “The challenge and responsibility of CIOs is to be intentional with every dollar and investment by keeping the organization focused on the most important priorities instead of pursuing every exciting new idea,” she says. Mandell says IT leaders should encourage a culture of innovation and ideation, but they must balance that with maintaining a strategic focus -- and communicate these goals across their own team and other areas of the organization. ... “Bridging the finance and engineering functions is hard work and you need both a team and a platform to effectively accomplish this,” she says. “When you do, you will be able to ensure and show that your cloud costs are being effectively managed.” ... When integrated effectively, AI-powered solutions can enhance decision-making and identify opportunities for optimization. “With the help of emerging technologies, CIOs and other budget decision makers will have greater visibility into spend, helping ensure resources are allocated strategically and IT environments are streamlined,” Mandell explains.


CIOs in financial services embrace gen AI — but with caution

AI is not the future of financial services — it’s the present. Genpact, a major business and technology services company that assists banks such as JP Morgan and Goldman Sachs, is already utilizing AI. “It’s really good at summarising, filling in blanks, and connecting dots, so generative AI is fit for purpose,” says Brian Baral, global head of risk at Genpact. “We’ve been able to leapfrog and do in months what had taken three years, but the data is key. Banks have to get ready to take the step forward.” Conscious of the recent history of disruption to financial services, the sector’s technology leaders are already looking for opportunities in AI. “Generative AI is starting off a new age of exploration in IT,” says Frank Schmidt, CTO at insurance firm Gen Re. Cugini at KeyBank agrees, and adds that the exploration has to include a cross-functional team from all areas of the business, not just IT. “We also pulled in some experts from Microsoft and Google to really understand what AI means to our sector.” Schmidt sees AI as having potential in process automation, particularly underwriting submissions. “AI will play a role in this workflow and classifying information,” he says.


NASA Releases First Space Cybersecurity Best Practices Guide

The guidance urges public and private sector organizations conducting space activities to establish a continuous process of mission security risk analysis and risk response in order to routinely identify and address security risks related to specific operations. NASA also advises organizations to apply the principles of domain separation and least privilege designs across their enterprises to better mitigate supply chain attacks and other operational vulnerabilities. Misty Finical, deputy principal adviser for enterprise protection at NASA, said the guidance "represents a collective effort to establish a set of principles that will enable us to identify and mitigate risks and ensure continued success of our missions, both in Earth's orbit and beyond." Reports detail a variety of challenges that organizations have faced in recent years while responding to emerging cybersecurity threats in space. A 2019 Government Accountability Office assessment found that the Department of Defense had struggled to adopt new approaches to protect U.S. satellites from cyberattacks by foreign adversaries and from the increasing threat of space debris.


How to incorporate human-centric security

The concept of human-centric security focuses on better management of the insiders who either inadvertently or maliciously cause so many of the threats that companies must deal with. Gartner recommends reducing friction caused by security strategies and starting to manage security risk. A human-centric approach to security not only takes the burden of security off the employee, it also looks at the overall risk associated with certain behaviors and at improving the experience of employees. One way to look at this is as a trade-off. Allowing people to work remotely, for example, carries a certain security risk that needs to be weighed against the benefits of giving employees flexibility. However, another important way to look at risk is to analyze the behaviors that are most likely to lead to future threats and determine new ways to mitigate those risks. By using insider risk management software, companies can better understand the new work patterns of remote employees, track negative sentiment, and flag access to sensitive data to proactively improve the company’s overall cybersecurity and employee experience.


AI: A Data Privacy Ally?

We can expect to see new technologies created to address the security and data privacy concerns in an AI world. Imagine consumers getting their own “AI Consent Assistant.” Such a tool would move us from static, one-time consent checkboxes to dynamic, ongoing conversations between consumers and platforms, with the AI Consent Assistant acting as a personal guardian to negotiate on our behalf. Or maybe AI tools could be developed to help security teams predict privacy breaches before they happen, or proactively auto-redact sensitive information in real time. We must think differently about AI in relation to data privacy – the future of data is not about how much we collect, but how ethically it is used and how we can realistically safeguard it so that we get the best out of AI without violating data privacy tenets. ... Transparency should never be a question – no one should have to guess at what data is collected, why, how it is stored, or how to remove it. Before launching any new technology or platform, companies should assess the privacy impact, working to identify potential privacy issues and taking preventive measures from the start, as it remains quite difficult to retrofit privacy.
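At its very simplest, the kind of auto-redaction step imagined above could be a pattern-based filter applied before text is stored or passed downstream. The sketch below is purely illustrative: it covers only email addresses and US SSN-style numbers, and a production system would need far broader coverage and context-aware models.

```python
import re

# Illustrative patterns only; real redaction would cover many more
# categories (names, addresses, card numbers) and use ML, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789."))
# → Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

The point of the sketch is the placement, not the patterns: redaction happens at ingestion, before the data reaches storage or a model, which is what makes the privacy protection proactive rather than retrofitted.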


Security And Market Adoption Of Open Banking

With regard to the first element that ensures security, the European Banking Authority drafted regulatory technical standards for strong customer authentication in 2016. As specified by PSD2, strong authentication must rely on at least two key elements that are independent of one another. This is to ensure the disclosure or theft of one authentication element does not compromise the overall security. ... As for the second element of security mitigation, the communication channel between third-party providers and banks, PSD2 paved the way for regulated application programming interfaces. The interface must allow third-party providers to identify themselves with banks when requesting access to accounts. This establishes requirements and responsibilities that prevent third-party providers from using expired certificates, or from having none at all, when fetching data or transmitting a payment order. ... Building trust in open banking is an essential step toward achieving widespread adoption as well. Companies can share real-life examples, such as case studies and testimonials. These are powerful ways to showcase the benefits of open banking and build trust with customers.



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 26, 2023

Generative AI is forcing enterprises and policymakers to rewrite the rules of cybersecurity

In a sense, creativity is the new hacker’s currency; it’s used to craft and execute attacks that traditional cybersecurity measures fail to detect and prevent. With 72 percent of white hat hackers believing they’re more creative than AI, it’s safe to assume that bad actors with similar skill sets only need a few creative muscles to cause material problems at scale. From persistent nagging to creative wordplay, hackers can trick an AI model to perform unintended functions and reveal information otherwise meant to be guarded. These prompts don’t need to be complex, and bad actors are constantly exploring new methods to get generative AI models to spill their secrets. The threat landscape for companies innovating with AI just got a lot more complex. So what should we do about it? Just like there are various ways to express a message in English, the same goes for LLM hacks. There are countless different ways to get an AI model to produce toxic or racist content, expose credit card information, or espouse misinformation. The only way to effectively protect AI apps from this volume of attack vectors is with data. A lot of it. Safeguarding against AI threats requires extensive knowledge of what those threats are. 


Rejection Doesn't Have to Be a Bad Thing. Here's How You Can Use It as a Tool for Success.

Pain is inevitable, but suffering is optional. Recognize that you have a choice in how you feel about rejection. Whatever story you tell yourself about rejection comes from you. It's up to you to interpret the information that exists in your world. You have the power to flip the script, change the narrative and tell yourself a different story. You can choose to view rejection as a good thing — it means you put yourself out there, asked a tough question and exuded courage. It means you got out of your comfort zone, which always helps us grow and evolve. It means you got to practice a skill (the skill of asking, influencing or selling). That practice will help you grow thicker skin and hone your craft, making you stronger and tougher. With that in mind, you can choose to view rejection as a good thing. ... Once you've been rejected and know why, you can adjust your strategy. You might learn that making calls at lunch time isn't effective because no one answers the phone. You might learn you've been targeting the wrong demographic and need to pick different prospects. You might learn prospecting on the weekdays isn't as effective as prospecting on weekends. 


You should be worried about cloud squatting

The core issue is that cloud asset deletions often occur without removing the associated DNS records, which creates security risks for subdomains. Failure to also delete records allows attackers to exploit those subdomains by creating unauthorized phishing or malware sites. This is called cloud squatting. Resources are typically provisioned and deallocated programmatically. Allocating assets such as virtual servers and storage space is quick, generally done in seconds, but deallocation is more complex, and that’s where the screwups occur. ... To mitigate this risk, security teams design internal tools to comb through company domains and identify subdomains pointing to cloud provider IP ranges. These tools check the validity of IP records assigned to the company’s assets, which are assigned automatically by cloud providers. I always get nervous when companies create and deploy their own security tools, considering that they may create a vulnerability. Mitigating cloud squatting is not just about creating new tools. Organizations can also use reserved IP addresses. This means transferring their owned IP addresses to the cloud, then maintaining and deleting stale records, and using DNS names systematically.
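The core of such an internal tool can be sketched quite compactly. Everything below is hypothetical placeholder data: a real tool would pull records from the organization's DNS zones, published provider CIDR blocks, and the list of currently allocated IPs from the cloud provider's API.

```python
import ipaddress

# Hypothetical cloud-provider CIDR blocks (real tools would fetch the
# provider's published ranges).
CLOUD_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

# Hypothetical DNS records for the organization: name -> target IP.
dns_records = {
    "app.example.com": "203.0.113.10",
    "old-demo.example.com": "203.0.113.99",   # asset deleted, record left behind
    "intranet.example.com": "192.0.2.5",      # on-premises, outside cloud ranges
}

# IPs still allocated to the organization, per the provider's inventory API.
active_cloud_ips = {"203.0.113.10"}

def find_dangling(records, active_ips):
    """Flag records that point into a cloud range but no longer map to an
    asset the organization owns -- candidates for cloud squatting."""
    dangling = []
    for name, ip_str in records.items():
        ip = ipaddress.ip_address(ip_str)
        if any(ip in net for net in CLOUD_RANGES) and ip_str not in active_ips:
            dangling.append(name)
    return dangling

print(find_dangling(dns_records, active_cloud_ips))
# → ['old-demo.example.com']
```

A record flagged this way is exactly the kind an attacker could reclaim by allocating the released IP from the same provider, which is why the fix is deleting the record, not just the asset.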


Great business partners drive great business performance

A core part of the finance team is also risk management and the ability to say “No”. Too often finance teams say “No”. My boss says that a CFO is many times a CF-No. There are two aspects here. To use football parlance, a CFO has not just to keep score but to score goals. This means that finance teams have to enable risk-taking. No risk, no gain. Capital allocation and building resilience to take measured risks is a critical function of the CFO. In a VUCA world, understanding the risks is a critical imperative. The Covid pandemic and the Ukraine war have resulted in significant supply chain risks. The rapid pace of digitisation, AI, and other developments threaten business models. Making sense of the developments and allowing strategic choices to develop, capital allocation to be done and monies invested is now engaging CFOs significantly. The CFO, at times, has to be a CF-No. Exercising this has to be done very carefully. Done too often, the finance teams become a blocker. But if the teams have done enough to provide insights, engage well with the business, and develop trust – saying a “No” is accepted as sage advice by the business. Getting this right is an art. But this is something that finance teams now have to constantly work on.


How the new Instegogram threat creates liability for organizations

Under Section 230 of the Communications Decency Act (CDA), companies that offer web-hosting services are typically shielded from liability for most content that customers or malicious users place on the websites they host. However, such protection may cease if the website controls the information content. A company that uses a social media network to create the picture or develop information would arguably control that information and thus may not be immune. That is, if a service provider is "responsible, in whole or in part, for the creation or development of the offending content," its actions could fall outside the CDA's protections. Whether the CDA protections extend to damage caused by malware is still largely an open question of law. Companies could therefore be liable for third-party damage resulting from an Instegogram attack, even if they did not know the digital image was infected. As no statutory immunities exist to shield social media users, a company could be liable for any resulting damage caused by a criminal hacker's embedded command-and-control infrastructure. 


The only CIO resolution that matters

CIOs need to own, or at the very least contribute substantively to, the overarching narrative regarding IT’s business context — what is going on, what has gone right, what has gone wrong, which technology developments require action, and so on. Ideally, the office of the CIO would brief the enterprise on a systemic basis on these matters. I am thinking of something similar to the President’s Daily Brief. Four critical decisions need to be made to establish such a brief: what form the briefing should take, which subject areas to keep an eye on, which constituencies need to be briefed, and what time frame to use. Once those decisions have been made, a systemic program of insight capture must also be instituted for the briefings to be effective. Such a learning process — observe, orient — can’t be left to chance. The CIO could assign an individual or set of deputies responsible for enumerating and sharing targeted insights to critical constituencies on a daily, weekly, or monthly basis. This knowledge wrangling — and ignorance vanquishing — operation could work at a departmental or group level and rotate around the staff on an episodic basis.


A lifecycle is a thing that exists from beginning to end. A product lifecycle lasts from inception to decommission. A production lifecycle is a regular way of producing products. A business lifecycle may go from order to ship. A lifecycle is often not even a thing itself, but just the process a thing goes through. I am aging each year. ... Architects build things. We are creative professionals. We make cathedrals. We make solutions. We make better business outcomes in particular ways. I have never seen a group of people who so deeply care about the products they create. Delivery of an outcome based on a business and technology strategy is 90% of architects’ jobs in practice. This means we touch a LOT of lifecycles. And that is why we must be so very good at navigating them: discussing pros and cons without religious belief, even without emotion. Building beautiful things is its own reward. How we get there is part of the price we pay to do it. The actual architecture lifecycle, then, is the method used by ALL of the architects in the practice to deliver value to the organization! But beware, this means all of them. You can’t put business architects in one stack, solution architects in another, and enterprise architects in a third.


The Elusive Quest for DevSecOps Collaboration

While the concept of DevSecOps has been discussed for years as a best practice for integrating security into development lifecycles, actual adoption has been gradual at best. As Or Shoshani, CEO of cloud security provider Stream Security, explains, "In most of the organizations that we have been working with and exposed to, the SecOps and DevOps are still being separated into two different groups." The reality is that despite widespread consensus on the need for closer collaboration between security and development teams, real-world progress has lagged. Shoshani attributes this to the constant tension between an exciting vision and on-the-ground implementation realities. Just as with past innovations like multi-cloud, he notes, "Everybody talks about it, but the industry isn't ready." Systemic culture shifts take patience. What's behind this lagging evolution? Incumbent challenges around processes, mindsets, and communication persist. Groups accustomed to working in silos and throwing issues "over the wall" resist new rhythms. Security teams' insistence on validating each release phase before the next begins trips up accelerated development timetables. And without air cover from leadership, there's little incentive to try.


Building A Secure Foundation: Embracing Best Practices For Coding

Not sure where to start when investing in secure coding practices? Begin with these tips:

- Organizations should provide developers with comprehensive training on secure coding practices, covering topics such as common vulnerabilities, mitigation techniques, and security tools.
- Organizations should invest in static analysis tools to help developers identify and address vulnerabilities in their code. These tools can automate the detection of security flaws, saving developers time and effort.
- Organizations should create a culture of security awareness within the development team, one that encourages open communication about security concerns and promotes a shared responsibility for building secure software applications.
- Developers should stay up to date on the latest security threats and vulnerabilities by reading security blogs, attending conferences, and participating in online forums.
- Developers should utilize security tools and resources, including static analysis tools, code review tools, and security libraries, to identify and address potential security flaws in their code.
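As a concrete instance of the "common vulnerabilities and mitigation techniques" such training covers, the sketch below contrasts an injectable query with its parameterized fix, using Python's built-in sqlite3. The first form is exactly the kind of flaw static analysis tools are designed to flag.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# so it matches every row instead of none.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Mitigated: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # the injected query returns all rows; the safe one returns none
```

The mitigation costs nothing in readability, which is why "use parameterized queries" is a staple of exactly the kind of developer training described above.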


From Compliance-First to Risk-First: Why Companies Need a Culture Shift

A paradigm shift is underway as businesses evolve – transitioning from a traditional "Compliance-First" approach to a more dynamic and forward-thinking "Risk-First" mindset. This cultural shift recognizes that compliance, while essential, should not be viewed in isolation but as an integral component of a broader risk management strategy. This evolution is not merely a conceptual adjustment but a pragmatic necessity, as organizations seek to proactively identify, understand, and mitigate risks, enhancing their resilience and adaptability in an ever-changing business environment. This examination dives into the importance of companies adopting a cultural transformation, moving from a narrow emphasis solely on compliance to a broader and more strategic embrace of risk. Beyond mere obligation, this shift fosters a culture that meets regulatory requirements and positions organizations to thrive amid uncertainty, bolstering their long-term sustainability. As we explore the complexities of this change, we uncover the fundamental connection between compliance and risk.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - December 25, 2023

Technical Debt is Killing Your Business: How a PLM Strategy Helps

Many organizations implicitly tolerate technical debt as a necessary investment to adapt to changing circumstances or swiftly seize new opportunities. Successful businesses stress the importance of managing technical debt through acceptance, measurement, and proactive strategies, including the adoption of open standards, abstraction, and incremental changes. ... Defining and adopting an effective PLM strategy is instrumental in managing technical debt comprehensively. A 2020 McKinsey study titled “Tech Debt: Reclaiming Tech Equity” highlighted the importance of strategic alignment, stating that “A degree of technical debt is an unavoidable cost of doing business, and it needs to be managed appropriately to ensure an organization’s long-term viability." Furthermore, the study emphasized that “the goal is not to reach zero technical debt. That would involve devoting all resources to remediation rather than building points of competitive differentiation. It would also make it difficult to expedite IT development when strategic or risk considerations require it. Rather, companies should work to size, value and control their technical debt and regularly communicate it to the business.”


Improving the case for waste from data centers

The challenge originally stems from the practical complexities of collecting and harnessing residual heat from data centers. Planning authorities actively encourage heat reclamation, but the lack of existing infrastructure poses a significant obstacle. While planning conditions that mandate developers to allow for connections to ‘future’ heating networks are a positive move, this becomes futile where there is no corresponding plan for heat network development. Developers comply with the condition out of an obligation to meet regulatory requirements rather than in genuine expectation of the infrastructure ever being used. From the perspective of data center operators, investing in the infrastructure only makes sense when it generates Operational Expenditure (OpEx) savings through reduced power and water consumption. However, the misalignment in load profiles complicates this matter. The heating network’s demand peaks in winter and falls in summer, while the data center operates the opposite way, as it can take advantage of ‘free cooling’ during the colder months. This misalignment in load profiles also impacts the ESCos.


The rise of observability and why it matters to your business

Automation is a two-edged sword. It’s one of those alluring concepts, but there’s real caution around trusting machines to judge what actions should and shouldn’t be taken and when. So given the sensitive nature of change management, we would expect this trend to continue to lean toward AI-led automation, but it will take some time before humans are mostly out of the loop. Moreover, while many vendors claim to have AI, there’s a wide spectrum of capabilities, and customers should be very cautious about vendor claims in this regard. Now, not surprisingly, the regulated industries of financial services, healthcare, and government see a much lower tendency to be mostly AI-led in this context over the next year (well under 5% say mostly AI-led in this chart), whereas industries such as energy and high tech are much more likely to adopt AI aggressively in this space. Interestingly, the data show that senior management is more likely to push for AI adoption, whereas practitioners, who literally have their jobs on the line – that is, machines replacing humans, or getting fired for implementing rogue automation – are much less optimistic.


Innovate to elevate: Blueprint for business excellence in 2024 and beyond

The upcoming year promises an exciting development in the form of GenAI, which will be integrated into everyday applications such as search engines, office software, design tools, and communication platforms. This integration will reveal its full potential as a super-smart hyper-automation engine. With the ability to take over routine tasks, including information retrieval, scheduling, compliance management, and project organization, individuals will be able to boost their productivity and efficiency. According to one report, hyper-automation combined with other technologies could, by 2024, automate work activities that currently occupy 60-70% of employees’ time. This development offers immense value to sectors such as software engineering, R&D, customer operations, marketing, and sales, making it an indispensable part of the IT industry. In this rapidly evolving world, organizations are constantly searching for ways to enhance customer service and drive growth. One of the most promising ways to achieve this is by embracing hyper-automation technologies such as AI-powered tools, Natural Language Processing (NLP), chatbots, and virtual assistants.


4 ways robotics, AI will transform industry in 2024

The future of manufacturing is intricately linked to IT/OT integration as data will underpin innovation and efficiency. Research shows that the manufacturing industry has been at the forefront of adopting cloud-based software services and we are already seeing some customers use these to enhance quality, cost efficiency, and predictability. That makes me confident that 2024 will see the growth of data-driven logistics and manufacturing systems. Many still have an outdated view of the cloud as merely being a data collector and backup function, as we know it from our private lives. But the real potential and power don’t lie in storing data or even in linking machines. The real transformative leap comes when cloud-based software services connect humans and machines and help manufacturers simplify complex processes and make smarter decisions. The benefits of this digital evolution are significant. Remote access to manufacturing data enables quick responses to issues and continuous automation improvement. With dynamic systems now essential, trusted cloud technologies offer the latest in security and state-of-the-art services.


Proper Data Management Drives Business Success

Organizations across industries are excited about generative artificial intelligence (AI) and large language models (LLMs), and for good reason. Tools like GPT-4 have the potential to transform business and revolutionize how employees do their jobs, so it’s no surprise that many people are enthusiastic about implementing them within their organizations. However, LLMs are only as good as the data on which they are trained. If an organization’s data isn’t properly sorted, tagged, and secured, the addition of LLMs will not be nearly as transformative as business leaders hope. Nearly half (45%) of IT leaders admitted that ineffective and inefficient Data Management means they can’t leverage emerging technology such as generative AI, which can put them at a competitive disadvantage. IT leaders must holistically assess the state of their data practices before implementing generative AI. Only 13% of respondents reported that Data Management initiatives are their number one priority, so it’s unsurprising that 77% of the average U.S. company’s data is redundant, obsolete, or trivial (ROT) or dark data.


Understanding the NSA’s latest guidance on managing OSS and SBOMs

In an effort to provide context and prioritization to downstream product and software consumers, the guidance recommends suppliers and developers adopt Vulnerability Exploitability eXchange (VEX) documents to help consumers and customers know which components are actually impacted by a vulnerability, which have been resolved, and what should potentially be addressed via compensating controls. The NSA also recommends suppliers and vendors adopt attestation processes to demonstrate the secure development of a product throughout the building, scanning, and packaging of product development and distribution. This effort is of course being led by industry initiatives such as in-toto and the SSDF, with self-attestations used when machine-readable artifacts are not generated. This helps provide assurance of not just the components of an end product but the security of the development process as well. To address vulnerabilities, the NSA recommends using not just CVE and NVD but also other vulnerability databases such as OSV, as well as vulnerability intelligence sources such as the CISA Known Exploited Vulnerabilities (KEV) catalog and the Exploit Prediction Scoring System (EPSS).
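As an illustration of how one of these sources can be consumed programmatically, here is a minimal Python sketch against OSV's public REST query endpoint. The package name and version shown are arbitrary examples, and the live request is kept in its own helper so the payload-building and parsing functions can be exercised offline; treat this as a sketch of the pattern, not a complete vulnerability-management tool.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # OSV's public query endpoint


def build_osv_query(name: str, ecosystem: str, version: str) -> dict:
    """Build the JSON payload OSV expects for a single package/version query."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}


def vulnerability_ids(response: dict) -> list:
    """Extract advisory IDs from an OSV response; an empty dict means no known vulns."""
    return [vuln["id"] for vuln in response.get("vulns", [])]


def query_osv(name: str, ecosystem: str, version: str) -> list:
    """POST the query to OSV and return matching advisory IDs (requires network)."""
    payload = json.dumps(build_osv_query(name, ecosystem, version)).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return vulnerability_ids(json.load(response))


# Offline demonstration with a response shaped like OSV's:
sample = {"vulns": [{"id": "GHSA-462w-v97r-4m45"}, {"id": "PYSEC-2021-66"}]}
print(vulnerability_ids(sample))
```

Calling `query_osv("jinja2", "PyPI", "2.4.1")` against the live endpoint would return the advisory IDs OSV knows for that release; cross-referencing those IDs with the KEV catalog and EPSS scores is what turns a raw vulnerability list into a prioritized one, as the guidance suggests.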


5 Common Data Science Challenges and Effective Solutions

The upskilling and reskilling of existing data science experts aren’t limited to technical skills. Data science experts also need enhanced problem-solving and communication skills. With the massive amount of data now available come new challenges and problems that need to be addressed. The solutions to these problems need to be properly communicated to team members and management, who may or may not have the expertise to interpret data on their own. We’ll explore this in more detail later. To address the challenge of a smaller pool of data scientists relative to demand, you need to stand out as a potential employer and attract some of those professionals who are part of that pool. So, offer competitive salaries and benefits. The average base pay for data scientists in the US is $146,422, according to Glassdoor, and if you can offer more, so much the better. Whether you hire data scientists or already have data professionals as employees, you need to invest in data science workshops and training. These can help ensure your team’s data science skills are attuned to the times and reflect current practices and standards in the data science industry.


How Observability Strengthens Your Company Culture

Observability breaks down silos and makes collaboration across different clouds, databases, and dashboards seamless. For example, an issue that the DevOps team discovers through observability might lead them to collaborate with the design team in a way they may never have before. Leaders should aim to do the same for their teams by fostering greater collaboration across the entire organization. A lack of effective collaboration and communication is the top cause of workplace failures, according to 86 percent of employees and executives. Just as observability is a step up from monitoring, collaboration is the output that evolves from transparent communication. Your head of accounting probably knows precisely where each decimal point needs to be within a spreadsheet and why it needs to be there. Can they say the same about the IT team’s technology stack or the sales team’s go-to-market plan? With a culture underpinned by collaboration, employees won’t just learn how to get along. They’ll understand why each cog in your machine functions the way it does, as well as the effect of their work on their fellow employees, the end product, and the business as a whole.


The Third-Party Threat for Financial Organisations

DORA requires financial entities to have robust contracts in place with ICT service providers. Financial organisations must also maintain a register of service providers and report on this to the competent authority every year. The key here is to manage risks. This includes managing the risk of having too many critical or important functions supported by a small number of service providers. In addition, DORA requires that financial entities only contract with providers that “comply with appropriate information security standards”. Where the ICT service provider supports critical or important functions, the financial entity must ensure the standards are “the most up-to-date and highest quality”. ... Unlike the GDPR (General Data Protection Regulation), DORA does not require that these standards be identified by a specific authority, so it’s reasonable to assume that ISO 27001 – since it sets the international benchmark for information security management – would qualify as such a standard. As Alan mentioned, certifications like ISO 22301 and Europrivacy™/® add further assurance, as do due diligence checks on suppliers’ resilience, particularly for critical suppliers.



Quote for the day:

"Innovation is taking two things that already exist and putting them together in a new way." -- Tom Freston

Daily Tech Digest - December 24, 2023

The emerging role of the chief resilience officer in BCDR

Chief resilience officer is a relatively new senior-level executive title and is still evolving. Responsibilities can include business continuity and disaster recovery (BCDR), incident response, cybersecurity, and risk management. The chief resilience officer might also be designated as the lead executive for crisis management activities. Chief resilience officers must ensure the organization can adapt and improve its operations so that future disruptive events are more effectively mitigated, resulting in minimal damage to the organization and its reputation. ... Preparing for and responding to disruptive events traditionally has been managed by a wide variety of job titles in an organization. Sometimes the role is part of the IT staff or disaster recovery team. Other times it can be part of administration, risk management, emergency management, human resources or facilities management. In medium to large organizations, the need for a central leadership role for these and related activities has become evident. ... Establishing a chief resilience officer reinforces the importance of BCDR activities across the entire organization.


Global securities body releases DeFi recommendations: Finance Redefined

Following its release, some community members worried about how it could “kill” DeFi, while others said it would not have a fatal effect. Apart from IOSCO’s move, China’s central bank also urged jurisdictions across the globe to regulate the DeFi space jointly. Meanwhile, the DeFi ecosystem flourished in the past week thanks to ongoing bullish market momentum, with most tokens trading in green on the weekly charts. IOSCO published nine recommendations for DeFi. The organization encourages consistency when it comes to regulatory oversight across jurisdictions worldwide. The new recommendations were a companion to the digital asset and crypto recommendations released in November. Furthermore, IOSCO released a note on how the two sets of recommendations can work hand in hand depending on the level of decentralization of regulated entities. ... Apart from IOSCO, the People’s Bank of China (PBoC) also pushed for joint DeFi regulation in its latest financial stability report. The central bank allotted a section to crypto assets in its report, underscoring the need for the industry to be regulated with joint efforts from various jurisdictions.


Unleashing power of language models in India’s IT landscape: The talking network revolution

The cornerstone of this transformative paradigm is the ability of language models to comprehend, analyze, and respond to user queries with human-like understanding. India, with its vast and diverse linguistic landscape, stands to benefit immensely from language models that can comprehend and respond in multiple languages. This linguistic versatility ensures that the Talking Network caters to the linguistic diversity of the Indian corporate environment, making it an inclusive and accessible solution for businesses across the country. One of the pivotal trends catalyzed by GAI and LLMs in India is the development of proactive and predictive IT maintenance tools. AIM Research predicts that, by 2024, 40 per cent of enterprise applications will embed conversational AI as a standard feature. Traditionally, IT maintenance has been a reactive process, addressing issues only when they arise. However, the Talking Network introduces a proactive dimension by leveraging predictive analytics and machine learning capabilities embedded in these language models. By analyzing historical data and identifying patterns, the network can foresee potential glitches and address them before they escalate into major disruptions.
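The proactive pattern described above (baseline historical telemetry, then flag deviations before they become major disruptions) can be sketched in a few lines. The following is a minimal, illustrative rolling z-score detector; the window size and threshold are arbitrary assumptions, not anything prescribed by the article:

```python
from collections import deque
from statistics import mean, stdev


def anomaly_alerts(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    Yields (index, value) for each reading more than `threshold` standard
    deviations away from the mean of the previous `window` readings.
    """
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield (i, value)
        history.append(value)
```

Run over a mostly stable metric such as request latency or sensor temperature, the detector surfaces the outlier before a hard failure: `list(anomaly_alerts([10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 50]))` yields `[(10, 50)]`. Production systems layer forecasting models on top of this kind of baseline, but the alert-before-escalation principle is the same.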


Why Bill Gates Says AI Will Supercharge Medical Innovations

He cites an AI-powered tool under development at the Aurum Institute in Ghana that helps health workers prescribe antibiotics without contributing to antimicrobial resistance, where pathogens learn how to get past antibiotic defenses. The tool can comb through all the available information about antimicrobial resistance and suggest the best drug plan for a patient. ... Gates also sees AI assisting in education, calling AI education tools "mindblowing," as they are tailored to individual learners, and says they will "only get better." He's excited about how the technology can be localized to students in many different countries and cultural contexts. Not everything on Gates' mind is AI-related. He's concerned about climate change, saying he's "blown away by the passion from young climate activists," and hopeful that 2024 will see more investment in innovations that will help those who are most affected by the climate crisis. And he even plunges into the debate over nuclear energy. Gates notes that high-profile disasters such as Chernobyl in the 1980s and Three Mile Island in the late 1970s have spotlighted the risks, but over the past year, he's seen a shift towards acceptance.


How AI Is Transforming Industries

A critical sector of the Indian economy, agriculture contributes 18 per cent of the GDP. And several new-age start-ups are emerging in this segment, with the likes of CropIn, DeHaat, BharatAgri, and Bijak. These start-ups are helping develop innovative solutions for various aspects of agriculture, especially precision farming, supply chain management, and market linkages. The 2019 start-up Fyllo presently has over 100 agronomy models to help farmers produce over 20 crops. It provides insights on growing crops based on climate or occurrences of diseases/pests on crops. Fyllo believes that both problems can be solved with accurate data, which led them to build a number of pest and pathogen prediction models using AI. "We use AI for multiple use cases at Fyllo. The first use case is predicting the weather. We use climate data from our devices and machine learning-based weather models to come up with a highly precise farm-level weather prediction model. Another use case is getting crop health and crop stage identification from satellite imagery. We use various machine learning models to do that."


Magnetic Knots Push Future Computing Toward 3D

“In the last decades, electronics basically developed in the paradigm of two-dimensional systems,” says Nikolai Kiselev, a staff scientist at the Peter Grünberg Institute in Jülich, Germany. “Which from a certain point of view is absolutely reasonable because technologically it’s much easier to fabricate and maintain such devices. But if we look toward the future, most probably to make our devices the most efficient, at some point, we will have to turn towards a three-dimensional architecture. And that’s where the discovery we made in our paper might become useful. ... Although hopfions move around readily, other aspects of their computing potential are still uncertain. The team used transmission electron microscopy to image the hopfion, and measuring its location more efficiently is an outstanding problem. The team says they plan to look at how these objects respond to electric current, which could help detect and track them. Plus, precise details on the exact ways hopfions might encode information are still an open question. That said, Kiselev adds, many questions like this don’t yet have answers because there has been no reason to ask them. 


The Art Of Listening: Silent Communication In Leadership

Silence, first and foremost, is a medium of introspection and reflection. Leaders, constantly barraged by information and demands, may find themselves lost in a maze of noise—both external and internal. Silence offers a sanctuary, a space to step back and reflect. It allows leaders to process information, contemplate decisions, and align their actions with their core values and objectives. This introspective silence is not merely an absence of noise; it’s an active engagement with one’s thoughts, a deliberate pause to understand the bigger picture. Moreover, silence can be a powerful communication tool. It’s not just about the absence of speech; it’s about listening, understanding, and absorbing. ... Silence also plays a crucial role in conflict resolution and negotiation. In tense situations, a leader’s silence can de-escalate emotions and give everyone a moment to breathe and reassess. By not immediately responding to a provocation or a challenging statement, leaders can avoid knee-jerk reactions that might exacerbate the conflict. Instead, silence can be used to control the tempo of the conversation, allowing for thoughtful and measured responses that are more likely to lead to constructive outcomes.


2024 in laptops: it’s shaping up to be a big year for Windows

It’s the AI coprocessor inside that’s intriguing to me, particularly because Intel and Microsoft have both been dropping hints about a future version of Windows arriving soon and how “AI is going to reinvent how you do everything on Windows.” Rumors suggest that Windows 12 will include a large focus on AI and take advantage of the AI coprocessors that Intel is building into its Core Ultra chips. (Intel isn’t the only one: AMD also has its own Ryzen 7000 mobile processors that include a dedicated AI engine, and these types of neural processing unit (NPU) chips are common on Arm-powered Windows laptops.) Intel held an AI event to launch its Core Ultra chips this month, just ahead of the annual Consumer Electronics Show (CES), where we’ll see all of the new laptops that are powered by Intel’s new chips. Lenovo, MSI, Acer, and Asus are all launching laptops with these new chips inside. While Intel talked a lot about “AI everywhere,” the missing piece of the puzzle, a new AI-focused version of Windows, is still a mystery right now.


Chips To Compute With Encrypted Data Are Coming

At first glance, it might seem impossible to do meaningful computation on data that looks like gibberish. But the idea goes back decades, and was finally made possible in 2009 by Craig Gentry, then a Stanford graduate student. Gentry found a way to do both addition and multiplication without calculation-killing noise accumulating, making it possible to do any form of encrypted computation. One comparison you can use to understand FHE is that it’s analogous to a Fourier transform. For those of you who don’t remember your college signal processing, a Fourier transform is a mathematical tool that turns a signal in time, such as the oscillation of voltage in a circuit, into a signal in frequency. One of the key side effects is that any math you can do in the time domain has its equivalent in the frequency domain. So you can compute in either time or frequency and come up with the same answer. The genius of fully homomorphic encryption is that it uses lattice cryptography, a form of quantum-computer-proof encoding, as the mathematical transformation. The problem with this approach is that the transformation leads to a big change in the type and amount of data and in the sorts of operations needed to compute. That’s where the new chips come in.
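To make the "compute on gibberish" idea concrete, here is a toy Python sketch using textbook RSA, which is homomorphic for multiplication only: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is not FHE (Gentry-style schemes support both addition and multiplication, and use lattice cryptography rather than RSA), and the parameters below are deliberately tiny and insecure, chosen purely for illustration:

```python
# Toy illustration: textbook RSA lets us multiply two ciphertexts and
# decrypt the product of the plaintexts, without ever seeing the inputs.
p, q = 61, 53          # tiny primes, insecure, demonstration only
n = p * q              # modulus: 3233
e, d = 17, 2753        # public / private exponents (e*d = 1 mod lcm(p-1, q-1))


def encrypt(m: int) -> int:
    return pow(m, e, n)


def decrypt(c: int) -> int:
    return pow(c, d, n)


c = (encrypt(4) * encrypt(5)) % n   # multiply the ciphertexts...
assert decrypt(c) == 20             # ...and the plaintexts got multiplied
```

Even in this toy, the "big change in the type and amount of data" the article mentions is visible: a plaintext of 20 becomes a four-digit ciphertext. In lattice-based FHE the blowup in data size and in the number of operations is far larger, which is exactly the overhead the new accelerator chips target.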


Ransomware Attackers Abuse Multiple Windows CLFS Driver Zero-Days

CLFS is a high-performance, general-purpose logging system available for user- or kernel-mode software clients. Its kernel access makes it eminently useful for hackers seeking low-level system privileges, and its performance-oriented design has left a series of security holes in its wake in recent years, which ransomware actors in particular have pounced on. ... Nothing in particular changed about the CLFS driver this year. Rather, attackers seem to have just now identified what was wrong with it this whole time: It leans too far toward performance in that inescapable, eternal balance between performance and security. "CLFS is perhaps way too 'optimized for performance,'" Larin wrote, detailing all of the various ways the driver prioritizes it over protection. "It would be better to have a reasonable file format instead of a dump of kernel structures written to a file. All the work with these kernel structures (with pointers) happens right there in the blocks read from disk. Because changes are made to the blocks and kernel structures stored there, and those changes need to be flushed to disk, the code parses the blocks over and over again every time it needs to access something."



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - December 23, 2023

How LLMs made their way into the modern data stack in 2023

Beyond helping teams generate insights and answers from their data through text inputs, LLMs are also handling traditionally manual data management and the data efforts crucial to building a robust AI product. In May, Intelligent Data Management Cloud (IDMC) provider Informatica debuted Claire GPT, a multi-LLM-based conversational AI tool that allows users to discover, interact with and manage their IDMC data assets with natural language inputs. It handles multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation. Then, to help teams build AI offerings, California-based Refuel AI provides a purpose-built large language model that helps with data labeling and enrichment tasks. A paper published in October 2023 also shows that LLMs can do a good job at removing noise from datasets, which is also a crucial step in building robust AI. Other areas in data engineering where LLMs can come into play are data integration and orchestration. 


Corporate governance in 2023: a year in review

2023 has seen a continuing trend of more responsibilities for directors. Often, this responsibility comes from regulators; sometimes, it comes from investors or other stakeholders. One thing is certain, though: directors are rapidly losing any remaining wiggle room to be “rubber-stamp” individuals. Modern board roles carry serious accountability; many directors are starting to appreciate that and adhere to new standards. The trouble is that sometimes the new standards overstretch the director – so much so that we now have concerns about overboarding, exhaustion, and undue stress. How will that play out if the trend of more responsibility continues? ... The board dismissed the evidently popular CEO Sam Altman in a decision made behind closed doors with utmost secrecy. And as the world’s attention predictably turned their way, they could give no answers. Soon, Altman was rehired after around 70% of the company’s staff threatened to resign and join Microsoft (a significant OpenAI investor). The board subsequently agreed to undergo a major reshuffle for more accountability and transparent decision-making.


Quantum Computing’s Hard, Cold Reality Check

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe. “We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.” The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science. Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. 


Navigating the Data Landscape: The Crucial Role of Data Governance in Today’s Business Environment

Data quality management has become increasingly paramount as the volume of data rises exponentially day by day. Organizations can protect their data with policies and procedures, ensure that they follow all the rules and regulations, and hire folks who understand the data being collected and what it means to the company, but if that data isn’t high quality, the organization may still get the short end of the stick. Maybe you’re three weeks late to a TikTok trend, or you miss out on a whole subset of customers because of a misstep in your collection methods; either way, the lost profit and the lost chance to build on that data point in the future could be a pivotal setback. Ensuring that your organization has processes to monitor and improve data quality on a continuous basis will save time and money in the long run. Despite its importance, implementing effective data governance comes with challenges. Organizations often face resistance to change, cultural barriers, and the complexity of managing diverse data sources.
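As a concrete starting point for that continuous monitoring, a data quality check can be as simple as profiling field completeness and duplication on each batch of records. A minimal sketch, assuming records arrive as Python dicts (the field names in the example are illustrative, not prescribed):

```python
def profile_quality(records, required_fields):
    """Minimal data-quality profile: per-field completeness and duplicate rate."""
    total = len(records)
    # Count records where a required field is missing or empty.
    missing = {f: sum(1 for r in records if not r.get(f)) for f in required_fields}
    # Count exact-duplicate records (same fields and values).
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return {
        "completeness": {f: 1 - missing[f] / total for f in required_fields},
        "duplicate_rate": dupes / total,
    }
```

For example, `profile_quality([{"id": 1, "email": "a@x.com"}, {"id": 2, "email": ""}, {"id": 1, "email": "a@x.com"}], ["id", "email"])` reports full completeness on `id`, two-thirds completeness on `email`, and a one-third duplicate rate. Real pipelines would run checks like this on a schedule and alert when a metric drops below an agreed threshold.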


Choosing Between Message Queues and Event Streams

There are numerous distinctions between technologies that allow you to implement event streaming and those that you can use for message queueing. To highlight them, I will compare Apache Kafka and RabbitMQ. I’ve chosen Kafka and RabbitMQ specifically because they are popular, widely used solutions providing rich capabilities that have been extensively battle-tested in production environments. ... Message queueing and event streaming can both be used in scenarios requiring decoupled, asynchronous communication between different parts of a system. For instance, in microservices architectures, both can power low-latency messaging between various components. However, going beyond messaging, event streaming and message queueing have distinct strengths and are best suited to different use cases. ... Message queueing is a good choice for many messaging use cases. It’s also an appealing proposition if you’re early in your event-driven journey; that’s because message queueing technologies are generally easier to deploy and manage than event streaming solutions. 
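The semantic difference is easy to see in miniature. The following illustrative Python sketch (not Kafka or RabbitMQ code) contrasts queue semantics, where a message is delivered to one consumer and then removed, with stream semantics, where events persist in an append-only log and each consumer tracks its own offset:

```python
from collections import deque


class MessageQueue:
    """Queue semantics: each message is delivered to one consumer, then gone."""
    def __init__(self):
        self._messages = deque()

    def publish(self, msg):
        self._messages.append(msg)

    def consume(self):
        return self._messages.popleft() if self._messages else None


class EventStream:
    """Stream semantics: an append-only log; each consumer keeps its own offset
    and can read events that other consumers have already seen."""
    def __init__(self):
        self._log = []
        self._offsets = {}

    def publish(self, event):
        self._log.append(event)

    def consume(self, consumer_id):
        offset = self._offsets.get(consumer_id, 0)
        if offset < len(self._log):
            self._offsets[consumer_id] = offset + 1
            return self._log[offset]
        return None
```

With a `MessageQueue`, publishing "order-1" and consuming twice yields the message once and then nothing; with an `EventStream`, a "billing" consumer and an "analytics" consumer each independently receive the same event. That is why streams suit fan-out and replay while queues suit one-shot task distribution, mirroring the Kafka-versus-RabbitMQ distinction the article draws.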


5G and edge computing: What they are and why you should care

Instead of relying solely on large, high-powered cell towers (as 4G does), 5G will run off both those towers and a ton of small cell sites that can be clustered together. This is how 5G achieves its population density. 5G is also supposed to be more energy efficient. As such, the communications component of IoT devices won't drain as much power, resulting in longer battery life for connected devices. There's also a ton of AI and machine learning in 5G implementations. 5G nodes and interface devices are deployed on the edge, away from central hubs. They utilize AI and machine learning to analyze communications performance, and use AI to bandwidth-shape communications, to wring as much performance out of the hardware as possible. You're familiar with the term "cloud computing." We've all used cloud services, services that run on a server someplace rather than on our desktop computers or mobile devices. The cloud, of course, isn't really a cloud. Amazon, Google, Facebook, Microsoft, and others operate massive data centers packed with thousands upon thousands of servers. Soft and fluffy, the cloud is not.


Stolen Booking.com Credentials Fuel Social Engineering Scams

Social engineering expert Sharon Conheady said this type of trickery remains extremely difficult to repel, because of the customer-first nature of hospitality. Many public-facing people in such organizations, such as receptionists, are "trained to help people - that's their job," and of course they're going to bend over backwards to try to meet apparent customers' demands, Conheady said in an interview at this month's Black Hat Europe conference in London. Help desks remain another frequent target. "I had a client lately who asked me to call the help desk and obtain BitLocker keys," she said, referring to a recent penetration test. "Every single one of the help desk agents gave us the BitLocker key." That prompted her to ask: Do these personnel even know what a BitLocker key is, and why they shouldn't share it? The client said they didn't know. While training people in customer-facing roles can help, Conheady said the only truly effective approach would be to put in place strong technical controls to outright prevent and block such attacks.


Significantly Improving Security Posture: A CMMI Case Study

“Phoenix Defense has led the way in adopting CMMI best practices for nearly two decades, and has now included the Security best practices,” says Kris Puthucode, Certified CMMI High Maturity Lead Appraiser at Software Quality Center LLC. “This adoption has yielded quantifiable benefits, enhancing security posture across Mission, Personnel, Physical, Process, and Cybersecurity domains. Additionally, incorporating Virtual work best practices has standardized virtual meetings and events, boosting efficiency.” Phoenix Defense has been a CMMI Performance Solutions Organization since 2005, first achieving Maturity Level 5 in 2020. ... Before adopting the CMMI Security and Managing Security Threats and Vulnerabilities Practice Areas in the model, Phoenix Defense had a closed network with no outward-facing applications and relied on a third-party vendor to monitor threats and spam. They did not fully, quantitatively track attacks against the networks or other data flows, and they required a more robust approach to properly ensure network security.


5 common data security pitfalls — and how to avoid them

While regulations like GDPR and SOX set standards for data security, they are merely starting points and should be considered table stakes for protecting data. Compliance should not be mistaken for complete data security, as robust security involves going beyond compliance checks. The fact is that many large data breaches have occurred in organizations that were fully compliant on paper. Moving beyond compliance requires actively identifying and mitigating risks rather than just ticking boxes during audits. ... Data is one of the most valuable assets for any organization. And yet, the question, “Who owns the data?” often leads to ambiguity within organizations. Clear delineation of data ownership and responsibility is crucial for effective data governance. Each team or employee must understand their role in protecting data to create a culture of security. ... Unpatched vulnerabilities are one of the easiest targets for cyber criminals. This means that organizations face significant risks when they can’t address public vulnerabilities quickly. Despite the availability of patches, many enterprises delay deployment for various reasons, which leaves sensitive data vulnerable.


Outmaneuvering AI: Cultivating Skills That Make Algorithms Scratch Their Head

Reasoning, the intellectual ninja of skills, is all about slicing through misinformation, assumptions, and biases to get to the heart of the matter. It’s not just drawing conclusions, but thinking about how we do that. This skill is the brain’s bouncer, keeping cognitive fallacies and hasty generalizations at bay. We humans, bless our hearts, are prone to jumping on the bandwagon or seeing patterns where there are none (like seeing a face on Mars or believing in hot streaks at Vegas). These mental shortcuts, or heuristics, can lead us astray, making reasoning not just useful but essential. AI is trained on our past reasoning reflected in old works. But it can’t reason on its own — at least not yet. Consider a business deciding whether to invest in a new technology. Without proper reasoning, they might follow the hype (everyone else is doing it!) or rely on gut feelings (it just feels right!). But with reasoning, they dissect the decision, weigh the evidence, consider alternatives, and make a choice that’s not just good on paper, but good in reality.



Quote for the day:

"Whether you think you can or you think you can’t, you’re right." -- Henry Ford