Daily Tech Digest - August 05, 2023

ESG & Climate Risk Management: Integrating Environmental Data In Ratings

Integrating environmental data into investment strategies allows investors to identify companies proactively addressing environmental challenges. It provides a comprehensive picture of a company's sustainability practices, particularly relevant in industries susceptible to climate risks, such as energy, agriculture, and transportation. By considering environmental data, investors can assess a company's resilience to climate change, regulatory changes, and physical risks such as extreme weather events. They can also evaluate how well a company aligns with global sustainability goals, such as the United Nations Sustainable Development Goals (SDGs). Moreover, environmental data integration helps investors identify companies capitalizing on emerging opportunities in the transition to a low-carbon economy. Companies with innovative solutions, efficient resource management, and clean technologies will likely thrive in a future characterized by climate-conscious policies and consumer preferences.


How New Tech Elevates Release Management’s Quality Standards

ML and AI technology can help with intelligently choosing when to deploy a release and when to roll it back. Machine learning (ML) models can learn from previous deployments, including success rates, user feedback and error patterns, to recommend deployment tactics. To make the process more dependable and error-resistant, AI algorithms can track a deployment, evaluate the system’s health and initiate automated rollbacks if anomalies or severe concerns are discovered. In general, ML and AI bring valuable capabilities to release management: stronger testing and quality assurance, streamlined release planning, continuous monitoring, release-risk analysis and sounder deployment decisions. Utilizing these technologies enables businesses to improve software quality, streamline release management procedures, and deliver high-performing applications more effectively and reliably.
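As a rough illustration of the automated-rollback idea, the sketch below flags a deployment for rollback when the post-deploy error rate is a statistical outlier against a pre-deploy baseline. The metric values, threshold and rollback hook are all hypothetical; a production system would feed this from real monitoring data.

```python
from statistics import mean, stdev

def should_roll_back(baseline_rates, post_deploy_rate, z_threshold=3.0):
    """Flag a rollback when the post-deploy error rate is an outlier
    relative to the pre-deploy baseline (a simple z-score test)."""
    baseline_mean = mean(baseline_rates)
    baseline_std = stdev(baseline_rates) or 1e-9  # guard against a flat baseline
    z_score = (post_deploy_rate - baseline_mean) / baseline_std
    return z_score > z_threshold

# Hypothetical error rates (errors per request) sampled before the release.
baseline = [0.010, 0.012, 0.009, 0.011, 0.010]

if should_roll_back(baseline, post_deploy_rate=0.045):
    print("Anomaly detected: triggering automated rollback")  # placeholder hook
```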


Unleash the Power of Open Source: The SONiC Revolution

As businesses embrace connectivity from core to cloud to edge, network management becomes more complex. Open source networking, specifically SONiC, simplifies network management by providing a unified fabric across the entire network ecosystem. SONiC enables organizations to manage and control their networks seamlessly, regardless of the deployment location. This simplification allows for a more consistent and streamlined network management experience, reducing operational complexities and enhancing efficiency. In addition, SONiC is built on standards-based open APIs, providing organizations numerous management platform options to choose from, which is particularly useful for those already invested in other DevOps, Linux-based, or open source solutions. ... Emerging technologies like AI/ML, 5G, and the data boom at the edge are transforming business operations and service delivery. These technologies require a modern infrastructure built on containerized architecture, Ansible automation, and predictive AI/ML monitoring solutions. SONiC easily enables the adoption of these technologies, providing the foundation for next-generation networking.


The Art of Reducing 'Technology Debt' & Winning Digital Transformation Race

Almost every organization is undergoing digital transformation, hence the noticeable surge in investments in cloud automation, cloud computing, and cybersecurity services. In this scenario, however, the choices organizations make during the journey matter just as much as the pace at which they attempt it. Not all transformations are mandatory. In the IT industry, there is a term called 'technology debt,' which basically means that you are at a specific juncture because of decisions you took years ago. Technical debt is similar to dark matter; you can deduce its impact but cannot see or measure it. So any company deliberating on large-scale digital transformation must ensure it partners with the right tech vendors to get the right perspective on talent and capability. To strike a balance between reaping the benefits of transformation and ensuring security, companies must adopt a multi-faceted approach, led effectively by experts. Companies should thoroughly assess potential partners' cybersecurity measures, track records, and commitment to data protection.


Spare Some Change: Emotional Debt, Technical Debt, & Preparing for Change

The tricky thing about emotional debt, in particular, is that the collectors are unpredictable — you can defer payments for months or even years without a single letter or call, then you run into a health scare, an unexpected bill, or just an especially bad day, and all of these feelings that you’ve been deferring come crashing down on you all at once. And it’s important to recognize that this emotional debt translates into workplace well-being; a recent Deloitte study indicates that many employees are struggling with unacceptably low levels of well-being. ... I had one of these moments recently. I ran headlong into a mental and emotional wall that I didn’t see coming at all, and at first, it made every part of life overwhelming. These are the moments when understanding and applying a strategic planning framework like the one I provided in my book, Building the Business of You, provides much-needed structure. I’ve never tackled some of the challenges I’m dealing with in my personal life before, but I have been a manager, which means I’ve helped create strategies to manage change and facilitate progress, so I know I can do that.


How to account for hidden costs of digital product development and offset them

Once your digital product is developed and deployed, the journey is far from over. Ongoing software maintenance and troubleshooting are crucial to ensuring your product's stability, security, and optimal performance. However, these aspects can bring forth hidden costs that may catch you off guard if not managed strategically. Software maintenance involves routine updates, bug fixes, security patches, and performance enhancements. Failing to allocate adequate resources to these tasks can lead to a deterioration of your product's functionality and user experience. Additionally, the need for troubleshooting arises when unforeseen issues or bugs emerge post-launch, demanding swift resolutions to avoid disruptions and customer dissatisfaction. ... Outsourcing grants you the flexibility to scale resources based on the current maintenance requirements, saving you from the burden of hiring and training additional staff. You can easily add or reduce resources as needed, ensuring cost-effectiveness and optimized productivity.


India’s Data Privacy Bill is Back

The Bill itself proposes data protection legislation that allows the transfer and storage of personal data in some countries while raising the penalty for violations. It requires consent before personal data is collected and imposes stiff penalties, to the tune of Rs. 500 crore, on those that fail to prevent data breaches. The Bill applies to the processing of digital personal data within Indian territory, and to processing outside India if it is done in connection with any profiling of, or offering of goods or services to, data principals within India. However, it does not apply to non-automated processing, to processing for domestic or personal purposes by individuals, or to data contained in records that have been in existence for at least 100 years. ... On the issue of consent, the Bill notes that the personal data of an individual can only be processed for a lawful purpose for which the individual has given consent or is deemed to have given her consent. It mentions that the consent should be free, specific, informed, and unambiguous.


AI Ethics Teams Lack ‘Support Resources & Authority’

Diplomatically approaching one product team after another in hopes of collaborating only gets ethics workers so far. They need some formal authority to require that problems be addressed, Ali says. “An ethics worker who approaches product teams on an equal footing can simply be ignored,” she says. And if ethics teams are going to implement that authority in the horizontal, nonhierarchical world of the tech industry, there need to be formal bureaucratic structures requiring ethics reviews at the very beginning of the product development process, Ali says. “Bureaucracy can set rules and requirements so that ethics workers don’t have to convince people of the value of their work.” Product teams also need to be incentivized to work with ethics teams, Ali says. “Right now, they are very much incentivized by moving fast, which can be directly counter to slowly, carefully, and responsibly examining the effects of your technology,” Ali says. Some interviewees suggested rewarding teams by giving them “ethics champion” bonuses when a product is made less biased or when the plug is pulled on a product that has a serious problem.


A 4-pronged strategy to cut SaaS sprawl

IT leaders must conduct regular software audits to identify and assess the usage of all SaaS applications across the organization. Kamal Goel, senior VP of IT at Hitachi Systems India, says, “Gather data on software adoption, utilization, and user feedback to determine which tools are genuinely adding value to the organization and which ones are underutilized or redundant. This analysis will help you make informed decisions about which subscriptions to keep, downgrade, or terminate.” At a previous employer, Goel’s IT department conducted a comprehensive audit of all SaaS applications and found that multiple teams were subscribed to a project management tool but most only used a fraction of its features. “By switching to a more focused and cost-effective project management tool and discontinuing the old one, the organization saved significant expenses without compromising productivity,” he says. Oftentimes, teams continue to rely on existing systems and processes while the quality of data and intelligence in the new system suffers, resulting in its more advanced features becoming superfluous.
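A minimal sketch of the utilization analysis Goel describes, assuming a hypothetical usage export that maps each SaaS app to its licensed seats and monthly active users; the 30% threshold is illustrative.

```python
# Hypothetical usage export: app name -> (licensed seats, monthly active users).
subscriptions = {
    "Project Tool A": (250, 40),
    "Project Tool B": (100, 92),
    "Design Suite": (50, 12),
}

UNDERUSED = 0.30  # flag apps where fewer than 30% of licensed seats are active

for app, (seats, active) in subscriptions.items():
    utilization = active / seats
    verdict = "review for downgrade or termination" if utilization < UNDERUSED else "keep"
    print(f"{app}: {utilization:.0%} of seats active -> {verdict}")
```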


API Standardization and Its Role in Next-gen Networking

APIs are becoming more important because of the way applications are built and services are delivered. Applications are composable, relying on integrated functional elements from different sources. A simple application for most businesses might need a mobile frontend, a link to a backend database, and a processing engine in between. And many might use data from third parties. Standardized APIs would ensure the elements worked together and developers did not have to start from scratch every time they built a new application. ... “Increasingly next-generation services like Network-as-a-Service (NaaS) solutions will be delivered across a system of many providers, and the networks supporting these services will be fully API-driven,” says Pascal Menezes, CTO, MEF. “For this to happen, standards-based automation is required throughout the entire system where all parties adopt a common, standardized set of APIs at both the business process and operational levels.”



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - August 04, 2023

Cloud may be overpriced compared to on-premises systems

Public cloud computing prices have been creeping up because they are offered by for-profit companies that must generate a profit. Running a public cloud service is costly, and the billions invested over the past 12 years must show investors a return. That’s why prices have been increasing, not to mention the additional value cloud providers can offer, such as integrated AI, finops, operations, etc. At the same time, the cost of producing hardware, such as traditional HDD storage, has dropped, adding a new layer of confusion: it is now a viable alternative to cloud-based storage systems. Thus, picking cloud computing over traditional hardware is no longer a quick decision. ... Again, this is about being entirely objective when looking at all potential solutions, including cloud and on-premises. Cost being equal, cloud computing will be the better choice nine times out of 10, but now that the prices are very different, that may not be the case. If you’re the person making these calls, you must consider all aspects of these solutions, including future criteria.
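One way to make that comparison objective is a simple break-even model: cumulative cloud spend versus up-front hardware plus ongoing upkeep. The sketch below uses invented figures purely to illustrate the shape of the calculation.

```python
def break_even_month(months, cloud_monthly, onprem_upfront, onprem_monthly):
    """Return the first month where cumulative on-prem spend drops below
    cumulative cloud spend, or None if it never does in the window."""
    for m in range(1, months + 1):
        cloud = cloud_monthly * m
        onprem = onprem_upfront + onprem_monthly * m
        if onprem <= cloud:
            return m, cloud, onprem
    return None

# Hypothetical figures: $8k/month cloud vs. $150k hardware plus $2k/month upkeep.
result = break_even_month(60, 8_000, 150_000, 2_000)
if result:
    month, cloud, onprem = result
    print(f"On-premises breaks even at month {month}: "
          f"cloud ${cloud:,.0f} vs. on-prem ${onprem:,.0f}")
```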


Leveraging Cloud DevOps to drive digital optimization and maximizing cloud benefits

Organizations need to consider several factors to ensure a successful implementation that delivers the desired business value. They must begin by identifying the business drivers and putting in place a change management program. They must train or upskill their employees and focus on the structural and process changes required to foster collaboration between different functions. Companies must define measurable goals and KPIs and establish governance after considering the prevailing technology landscape and the future roadmap. They must know and assess their existing infrastructure and portfolio, their requirements, the current challenges they face, the right cloud mix for their needs, and so on. They must plan their resources and identify their security needs to ensure Cloud DevOps can meet these requirements. ... The success of Cloud DevOps can be measured along four primary dimensions – efficiency, agility, reliability, and quality.


Spatial Data Science: The Basics You Need to Know

“Spatial data science is data science on geospatial data -- location data, navigation data, GPS data, any data that is geocoded,” Kobielus explained. “Geospatial data science builds on and extends the capabilities of geographic information systems.” ... When asked about principal use cases for spatial data science, Kobielus suggested several possibilities. “A core and mainstream enterprise application for spatial data science has been address management. Customer information management needs to be integrated with permanent addressing which then is geocoded so that as your customers move around you always know what their actual address is.” Other possible uses include determining optimal locations for things such as retail outlets or manufacturing facilities, optimizing supply chain logistics, tracking inventory, personalizing user experiences on mobile devices, allowing businesses to provide targeted content, and indoor applications to help organizations optimally arrange things within warehouses or other indoor spaces.


Forecasting Team Performance

A useful technique is systems mapping. To do it, you first identify qualitative factors and situations affecting your organization. By qualitative, I’m referring to what you can't see on a neat dashboard or chart — the things that only come up in casual 1:1 calls with people, or in water cooler conversations and healthy retrospectives. Next, think about second-order effects. These are the consequences of those qualitative factors, which have further implications, and so on. Researchers have shown that second-order effects are a big blind spot of the human brain; we think in terms of linear cause-and-effect and so often miss significant domino-effect repercussions. One way of making sure you’re sensitive to these second-order effects is to talk to people who have been in the organization for a long time. They're often the people who are most eager to share the types of stories that will help you draw connections between seemingly unrelated situations.


When Do Agile Teams Make Time for Innovation?

The unintended side effects of shorter sprints and sprint commitments can be devastating for creativity and breakthroughs. Teams that feel pressured by time or fear of failure aren't going to feel safe to experiment. In the absence of psychological safety, innovation recedes. It's critical that agile teams push back against this pressure to deliver and never fail. A recent Harvard Business Review article on psychological safety said, "In essence, agile’s core technology isn’t technical or mechanical. It’s cultural." Or as Entrepreneur.com put it, “Your company needs an innovation culture, not an innovation team. You don't become an innovative company by hiring a few people to work on it while everybody else goes through the motions.” As agilists, we must fight to establish experimentation as part of our company culture. Companies that value innovation empower self-organizing teams to try new things, encourage (and fund) continuous learning and improvement, solicit and act on feedback and ideas, and emphasize collaboration and communication.


Microsoft attacked over ‘grossly irresponsible’ security practice

Yoran said the so-called shared responsibility model of cyber security espoused by public cloud providers, including Microsoft, was irretrievably broken if a provider fails to notify users of issues as they arise and apply fixes openly. He argued that Microsoft was quick to ask for its users’ trust and confidence, but in return they get “very little transparency and a culture of toxic obfuscation”. “How can a CISO, board of directors or executive team believe that Microsoft will do the right thing given the fact patterns and current behaviours? Microsoft’s track record puts us all at risk. And it’s even worse than we thought,” said Yoran. “Microsoft’s lack of transparency applies to breaches, irresponsible security practices and to vulnerabilities, all of which expose their customers to risks they are deliberately kept in the dark about,” he added. A Microsoft spokesperson said: “We appreciate the collaboration with the security community to responsibly disclose product issues.”


Managing Partnership Misfits

To get the right stakeholders on board and collaborating, project initiators must combine engagement and containment strategies. And to do this, they need more practical and nuanced guidance along with a new set of lenses through which to assess the suitability of potential partners and to identify, motivate, or control misfits. In other words, they need a tool to identify potential fault lines in future partnerships and to help iron out or contain misalignments. Based on in-depth studies of successful and unsuccessful partnerships, we propose a framework that tests partner fit across three dimensions: task-fit (what each party needs); goal-fit (what each party aims to achieve); and relationship-fit (how each party works). How potential partners measure up on these dimensions flags likely misalignments with a prospective partner and allows project initiators to design ways to overcome them. ... You are looking for a partner with the required capabilities or resources who values the expected gains, which are not just financial rewards, but could relate to learning, inspiration, or reputation. 


Comparing Different Vector Embeddings

In the simplest terms, vector embeddings are numerical representations of data. They are primarily used to represent unstructured data. Unstructured data are images, videos, audio, text, molecular images and other kinds of data that don’t have a formal structure. Vector embeddings are generated by running input data through a pretrained neural network and taking the output of the second-to-last layer. Neural networks have different architectures and are trained on different data sets, making each model’s vector embedding unique. That’s why working with unstructured data and vector embeddings is challenging. Later, we’ll see how models with the same base fine-tuned on different data sets can yield different vector embeddings. The differences in neural networks also mean that we must use distinct models to process diverse forms of unstructured data and generate their embeddings. For example, you can’t use a sentence transformer model to generate embeddings for an image. On the other hand, you wouldn’t want to use ResNet50, an image model, to generate embeddings for sentences.
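As a small illustration of why embeddings from different models cannot be mixed, the sketch below encodes the same sentence with two publicly available text models via the sentence-transformers package (an assumption on my part; the article does not name specific tooling) and shows that the resulting vector spaces differ even in dimensionality.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

sentences = ["Vector embeddings are numerical representations of data."]

# Two pretrained text models; each defines its own, incompatible vector space.
model_a = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dimensional vectors
model_b = SentenceTransformer("all-mpnet-base-v2")  # 768-dimensional vectors

embedding_a = model_a.encode(sentences)[0]
embedding_b = model_b.encode(sentences)[0]

# Different dimensions, so the two embeddings cannot even be compared directly.
print(len(embedding_a), len(embedding_b))  # 384 768
```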


Could C2PA Cryptography be the Key to Fighting AI-Driven Misinformation?

The C2PA specification is an open source internet protocol that outlines how to add provenance statements, also known as assertions, to a piece of content. Provenance statements might appear as buttons viewers could click to see whether the piece of media was created partially or totally with AI. Simply put, provenance data is cryptographically bound to the piece of media, meaning any alteration to either one of them would alert an algorithm that the media can no longer be authenticated. You can learn more about how this cryptography works by reading the C2PA technical specifications. This protocol was created by the Coalition for Content Provenance and Authenticity, also known as C2PA. Adobe, Arm, Intel, Microsoft and Truepic all support C2PA, which is a joint project that brings together the Content Authenticity Initiative and Project Origin. The Content Authenticity Initiative is an organization founded by Adobe to encourage providing provenance and context information for digital media. 
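To make the idea of cryptographic binding concrete, here is a deliberately simplified sketch, not the actual C2PA manifest format or its certificate-based signatures: it hashes the media bytes, pairs the hash with a provenance assertion, and signs the pair, so altering either the media or the assertion breaks verification.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"placeholder-key"  # stand-in; real C2PA uses certificate-based signing

def make_manifest(media: bytes, assertion: str) -> dict:
    """Bind a provenance assertion to media: hash the content, then sign
    the (hash, assertion) pair so neither can change undetected."""
    payload = {"content_hash": hashlib.sha256(media).hexdigest(),
               "assertion": assertion}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify(media: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_hash"] == hashlib.sha256(media).hexdigest())

image = b"...raw image bytes..."
manifest = make_manifest(image, "Created partially with AI")
print(verify(image, manifest))         # True: media and assertion authenticated
print(verify(image + b"x", manifest))  # False: any alteration breaks the binding
```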


Multi-modal data protection with AI’s help

First, there is a malicious mind behind the scenes thinking and scheming about how to change a given message for exfiltration. That string for exfil is not intrinsically tied to a medium: it could go out over Wi-Fi, mobile, browser, print, FTP, SSH, AirDrop, steganography, screenshot, Bluetooth, PowerShell, buried in a file, over a messaging app, in a conferencing app, through SaaS, in a storage service, and so on. A mind must consciously seek a method and morph the message to a new medium with an adversary and their toolkit in mind to succeed and, in this case, to get points in the hackathon. Second, a mind is required to recognize the string in its multiple forms or modes. Classic data loss prevention (DLP) and data protection work with blades that are disconnected from one another: a data type is searched for with unique search criteria and an expected sampling data type and format. These can be simple, such as credit card numbers or social security numbers in HTTP, or complex, like looking for data types that look like a contract in email attachments.
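The "simple" case is worth seeing in code. This sketch scans text for candidate credit card numbers across a couple of surface forms (spaces, hyphens) and filters false positives with the Luhn checksum; the pattern is illustrative, and real DLP engines cover far more data types and encodings.

```python
import re

# Runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: separates plausible card numbers from random digit runs."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# The same card number in two different surface forms is still detected.
print(find_card_numbers("ref 4111 1111 1111 1111 and 4111-1111-1111-1111"))
```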



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - August 03, 2023

When your teammate is a machine: 8 questions CISOs should be asking about AI

There are many potential benefits that can flow from incorporating AI into security technology, according to Rebecca Herold, an IEEE member and founder of The Privacy Professor consultancy: streamlining work to shorten project timelines, making quick decisions, and finding problems more expeditiously. But, she adds, there are a lot of half-baked instances being employed, and buyers "end up diving into the deep end of the AI pool without doing one iota of scrutiny about whether or not the AI they view as the HAL 9000 savior of their business even works as promised." She also warns that when "flawed AI results go very wrong, causing privacy breaches, bias, security incidents, and noncompliance fines, those using the AI suddenly realize that this AI was more like the dark side of HAL 9000 than they had even considered as being a possibility." To avoid having your AI teammate tell you, "I'm sorry, Dave, I'm afraid I can't do that," when you are asking for results that are accurate, non-biased, privacy-protective, and in compliance with data protection requirements, Herold advises that every CISO ask eight questions.


Generative AI needs humans in the loop for widespread adoption

Generative AI by itself has many positives, but it is currently a work in progress, and it will need to work with humans for it to transform the world, which it is almost certain to do. This blending of man and machine is best described as “AI with humans in the loop,” and it is already being widely adopted by businesses that want to cut operating costs and improve customer services, but also realise that humans will be crucial if these objectives are to be achieved. One of the sectors embracing this new normal is financial journalism. Reuters managing director Sue Brooks announced that AI will be used to cover news stories and will create a “golden age” of news. Crucially, she also said it was vital there “was always a human in the loop to ensure total accuracy”. Reuters content now has automated time-coded transcripts and translation of many languages into English, part of the Reuters Connect service. Brooks went on to say that this meld would “free up brain power to be creative and put all these tools in your toolbox to create magical experiences for readers”.


AI chip adds artificial neurons to resistive RAM for use in wearables, drones

According to Weier Wan, a graduate researcher at Stanford University and one of the authors of the paper, published in Nature yesterday, NeuRRAM has been developed as an AI chip that greatly improves energy efficiency of AI inference, thereby allowing complex AI functions to be realized directly within battery-powered edge devices, such as smart wearables, drones, and industrial IoT sensors. "In today's AI chips, data processing and data storage happen in separate places – computing unit and memory unit. The frequent data movement between these units consumes the most energy and becomes the bottleneck for realizing low-power AI processors for edge devices," he said. To address this, the NeuRRAM chip implements a "compute-in-memory" model, where processing happens directly within memory. It also makes use of resistive RAM (RRAM), a memory type that is as fast as static RAM but is non-volatile, allowing it to store AI model weights. 


The CISO role has changed, and CISOs need to change with it

Perhaps the best way to improve security—and make the CISO’s job a little easier—is not reliant on technology. A change in culture is the best way to truly create an organization where security is top of mind. CISOs, part of upper management, but also part of the security team, are uniquely positioned to lead this change – both with other leaders and those they lead. A security-first culture requires embedding security in everything a business does. Developers should be enabled to create secure code that is free from vulnerabilities and resistant to attacks as soon as it is written, as opposed to being a consideration much later in the SDLC. Designated security champions from the developer ranks should lead this charge, acting as both coach and cheerleader. This approach means that security is not being mandated from above, but part of the team’s DNA and backed up by management. This cannot be an overnight change, and may be met with resistance. But the threat landscape is too complex, too advanced and too ubiquitous for any one person or even a small team to handle alone.


Hosting Provider Accused of Facilitating Nation-State Hacks

The allegations, whether true or not, are a reminder that cybercrime doesn't operate in a vacuum. Rather, there's a burgeoning service and support ecosystem. Services include initial access brokers who provide on-demand access to victims, botnet owners who facilitate malware-laden phishing attacks, and repacking services that make malware tougher to spot. They also include ransomware-as-a-service operators who lease their code to business partners, the affiliates who use it to infect victims, and cryptocurrency money laundering services that help criminals, operating online or off, convert their ill-gotten gains into cash. Online attackers require infrastructure for launching their attacks. Some make use of bulletproof service providers, which provide VPS and other types of hosting services in return for a promise, typically for a relatively high fee, that customers can do whatever they like. Halcyon's report alleges that Cloudzy functionally operates in a similar manner, due to a lack of proper oversight, including allowing cryptocurrency-using customers to remain anonymous.


The tug-of-war between optimization and innovation in the CIO’s office

The downside of prioritizing optimization is the risk of overlooking opportunities for innovation that could have long-term impacts on the organization’s growth and relevance. Think of game-changing new systems, such as AI, that increase supply chain efficiency, or automated manufacturing steps that speed up productivity and reduce costs at the same time. Usually, the value of a business is directly defined by the innovations that can drive it. Think about the services we use now, from food delivery to home sharing, whose draw is better customer experiences through innovation. Emphasizing innovation enables companies to stay ahead of the curve, attracting customers with cutting-edge products and services. ... These mistakes will kill a company. Taking resources away from innovation and spending them on making things work as they should removes business value. I think we’re going to see a great many businesses spend so much money to fix past mistakes that they’ll end up throwing in the towel.


Flight to cloud drives IaaS networking adoption

IDC describes IaaS cloud networking as a foundational networking layer that allows large enterprises and technology providers to connect data centers, colocation environments, and cloud infrastructure. With IaaS networking, the network infrastructure and services are scalable and available on-demand, provisioned and consumed just like any other cloud service. That makes this infrastructure more scalable and agile than traditional approaches to networking, according to IDC. Direct cloud connects/interconnects is the largest segment of IaaS networking, accounting for more than half of all IaaS networking revenue. The four other major segments of the IaaS networking market are cloud WAN (transit), IaaS load balancing, IaaS service mesh, and cloud VPNs (to IaaS clouds), according to IDC. Cloud WAN, which includes cloud middle-mile and core transit networks, is the fastest-growing segment of IaaS networking, with a forecasted five-year compound annual growth rate of 112%, says IDC. IaaS service meshes are also expected to see strong growth, with a forecasted five-year compound annual growth rate of 68%.


The rise of Generative AI in software development

AI is accelerating the process of going from zero to one – it jumpstarts innovation, releasing developers from the need to start from scratch. But the 1 to n problem remains – they start faster but will quickly have to deal with issues like security, governance, code quality, and managing the entire application lifecycle. The largest cost of an application isn't creating it – it's maintaining it, adapting it, and ensuring it will last. And if organisations were already struggling with tech debt (code left behind by developers who quit, or by vendors who sunset apps, creating monstrous workloads to take care of), now they'll also have to handle massive amounts of AI-generated code that their developers may or may not understand. As tempting as it may be for CIOs to assume they can train teams on how to prompt AI and use it to get any answers they need, it might be more efficient to invest in technologies that help you leverage Gen AI in ways that you can actually see, control and trust. This is why I believe that in the future, fundamentally, everything will be delivered on top of AI-powered low-code platforms.


Will law firms fully embrace generative AI? The jury is out | The AI Beat

On one hand, gen AI is shaking up the legal industry, with companies like Everlaw adding options to their product portfolio, while Thomson Reuters can integrate with Microsoft 365 Copilot to power legal content generation directly in Word. On the other hand, lawyers tend to be a conservative bunch — and in this case, attorneys would likely be wise to be cautious, with headlines like “New York lawyers sanctioned for using fake ChatGPT cases in legal brief” going viral. Another problem is that their clients may not feel comfortable with law firms using gen AI — a new survey found that one-third of consumer respondents said they’re against any use of gen AI in the legal field. ... But with Everlaw’s new gen AI now available in beta, lawyers can go beyond just clustering data at the aggregate level to querying, summarizing and otherwise extracting details from documents to get what they need. For example, the company says that while it typically takes hours for a legal professional to compose a statement of facts, it can now happen in about 10 seconds, delivering legal teams a rough draft to edit and fact check. 


Vulnerability Management: Best Practices for Patching CVEs

In a perfect world, you would analyze all CVEs first to determine the priority order for patching. But this just isn’t scalable due to the sheer number of vulnerabilities and how frequently CVEs are discovered. In reality, only a handful of CVEs actually affect your software. Of course, there’s no way to know for certain how a CVE affects your application until it has been analyzed, but because there are so many, including those from transitive dependencies, it is nearly impossible to analyze them all before new CVEs are discovered, or within a tight release schedule. Instead, we recommend you start by patching all critical and high-severity CVEs without analysis. ... Preventing, detecting and patching CVEs needs to be a shared responsibility between developers and security teams. It is not sustainable for security teams to bear the responsibility of managing and patching CVEs alone. Development teams can often be hesitant to push frequent updates for fear that updates to software libraries will create bugs in their software.
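As a rough sketch of that triage policy, the snippet below splits scanner findings into a patch-now bucket (critical and high severity) and an analyze-later queue. The findings and field names are hypothetical; real scanners emit richer records.

```python
# Hypothetical scanner output; real tools emit far richer records.
findings = [
    {"id": "CVE-2023-0001", "severity": "CRITICAL", "package": "libfoo"},
    {"id": "CVE-2023-0002", "severity": "LOW",      "package": "libbar"},
    {"id": "CVE-2023-0003", "severity": "HIGH",     "package": "libbaz"},
    {"id": "CVE-2023-0004", "severity": "MEDIUM",   "package": "libqux"},
]

PATCH_FIRST = {"CRITICAL", "HIGH"}

patch_now = [f for f in findings if f["severity"] in PATCH_FIRST]
analyze_later = [f for f in findings if f["severity"] not in PATCH_FIRST]

for f in patch_now:
    print(f"patch without analysis: {f['id']} ({f['package']})")
for f in analyze_later:
    print(f"queue for analysis:     {f['id']} ({f['package']})")
```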



Quote for the day:

"Our greatest battles are with our own minds." -- Jameson Frank

Daily Tech Digest - August 02, 2023

Return-to-office mandates rise as worker productivity drops

In the first quarter of 2023, labor productivity dropped 2.1% in the US, even as the number of hours worked increased by 2.6%, according to the BLS. The highest levels of remote workers are in North America and Northern Europe, with lower levels in Southern Europe, and fewer still in Asia — particularly in developing countries, according to a study by Stanford University’s Institute for Economic Policy Research (SIEPR) released in July. ... “Bosses want workers back in the office; workers want flexibility,” said Peter Miscovich, the managing director of Jones Lang LaSalle IP (JLL), a global real estate investment and management firm that tracks remote work trends. But current return-to-office mandates haven't always been effective, and they risk driving employees away, according to Miscovich. "Given current low-unemployment rates — particularly in technology fields — talent has the upper hand and will have the upper hand over the next 10 to 15 years,” Miscovich said. While some companies have drawn attention for heavy-handed tactics to get employees back to the office, others are succeeding in getting buy-in for structured hybrid work policies.


IT professionals: avoiding bad days at work

The most common cause of stress is work-related, with one recent study showing that 79% of UK professionals say they frequently feel stressed, and our own research revealing that over two-thirds of IT leaders (70%) reported pressure to deliver security protection in a short amount of time. Whilst organisations must be able to identify the sources of stress to support their people, it must be noted that, due to the nature of working with technology, IT professionals will inevitably encounter stressful situations – whether the solution is to turn it off and on again or something much more serious. Having the right mix of people, processes and technology will help minimise these situations; however, when they do occur, it is vital that leaders are able to recognise them and support their people. This comes back to ensuring the most appropriate technology is in place, along with clear plans and processes to best support the needs of the organisation, its people and its customers.


Why synthetic data is a must for AI in telecom

Synthetic data reflects real-world data both mathematically and statistically. But rather than being collected from and measured in the real world, it is created by computer simulations, algorithms, simple rules, statistical modeling and other techniques based on small, anonymized real-world samples. “While real data is almost always the best source of insights from data, real data is often expensive, imbalanced, unavailable or unusable due to privacy regulations,” Gartner VP analyst Alexander Linden said in a Q&A blog post. “Synthetic data can be an effective supplement or alternative to real data.” Artificial data can help mitigate weaknesses in real data or can be used when no live data exists, when data is highly sensitive or otherwise biased, or when data can’t be used, shared or moved. It doesn’t always have to be trained on real data, however: it can be generated from domain or institutional knowledge or from traces of real data. With the massive explosion in the use of data-hungry generative AI models and the necessity of privacy and security, enterprises across industry segments are recognizing the potential in synthetic data.
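A toy example of the statistical-modeling route: fit a simple distribution to a small anonymized sample, then draw synthetic records that mirror its statistics without exposing any original value. The sample values and the Gaussian assumption are purely illustrative.

```python
import random
from statistics import mean, stdev

# Small anonymized real-world sample (hypothetical call durations, in seconds).
real_sample = [112, 95, 130, 87, 140, 101, 99, 123]

# Fit a simple Gaussian model to the sample...
mu, sigma = mean(real_sample), stdev(real_sample)

# ...then generate synthetic records that reflect its statistics without
# reusing any of the original values.
synthetic = [round(random.gauss(mu, sigma), 1) for _ in range(1000)]

print(f"real mean={mu:.1f}, synthetic mean={mean(synthetic):.1f}")
```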


DDoS Attacks and the Cyber Threatscape

Occasionally, DDoS attacks were carried out to extort ransom payments, colloquially known as Ransom DDoS (RDDoS) attacks. The RDDoS attack should not be mistaken for ransomware, which may be driven by similar motivations but employs different tactics, techniques, and procedures (TTPs). The operational method in ransomware requires ‘denial of data’ by a malicious script, whereas RDDoS involves denial of service, generally by a botnet. Running a ransomware operation requires access to internal systems, which is not the case in ransom DDoS attacks. In RDDoS, threat actors leverage the threat of denial of service to conduct extortion, which may include sending a private message by email demanding a ransom to prevent the organisation from being targeted by a DDoS attack. According to a threat intelligence report, throughout the 2020–2021 global RDDoS campaigns, attacks ranged from a few hours up to several weeks, with attack rates of 200 Gbps and higher. The DDoS attack can also serve as a means of reconnaissance, allowing attackers to assess the target’s vulnerabilities and gauge the strength of its defenses.


MDM’s Role in Strengthening Data Governance Practices

Ensuring regulatory compliance and the trustworthiness of data is paramount. This is where a systematic process comes into play, and Gartner MDM is leading the way in providing a comprehensive solution. With the ability to configure data governance policies, capture metadata, and perform data lineage, Gartner MDM allows for a full understanding of data assets and their use. This translates into improved compliance, reduced risk, and enhanced data trustworthiness. By implementing a systematic process that includes Gartner MDM, organizations can confidently navigate the complex landscape of regulatory requirements, safeguard data integrity, and ultimately increase customer trust. ... Data Governance has become essential with the ever-increasing amount of data organizations generate. However, manually reviewing and managing such a large amount of data can be challenging and time-consuming. This is where automation techniques come into play. By automating data governance processes, organizations can streamline the process, reduce errors, and make better decisions resulting from the data. 


Delivering privacy in a world of pervasive digital surveillance: Tor Project’s Executive Director speaks out

Our stance is clear: we think that encryption is a right, which is why it is built into our technology. As more and more aspects of our lives are carried out digitally, whether it is conducting financial transactions, accessing health care services or staying in touch with friends and loved ones, our online activity should be governed by the same rights to privacy and anonymity as our analog experiences. As part of our work, the Tor Project is currently active in the debate around the need to safeguard E2EE (end-to-end encryption). We are engaged in advocacy work on the issue and have supported other organizations in their efforts to raise awareness, especially as part of the Global Encryption Coalition. ... Earlier this year, we launched the Mullvad Browser, a free, privacy-preserving browser offering similar protections as Tor Browser without the Tor network. Mullvad Browser is another option for internet users who are looking for a privacy-focused browser that doesn’t need a bunch of extensions and plugins to enhance their privacy and reduce the factors that can accidentally de-anonymize them.


The Debate Around AI Ethics in Australia is Falling Far Behind

In 2016, the World Economic Forum looked at the top nine ethical issues in artificial intelligence. These issues have all been well-understood for a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, which were future-thinking at the time, are starting to become reality, yet the ethical concerns have yet to be actioned. ... The WEF noted the potential for AI bias back in its initial article, and this is one of the most talked-about and debated AI ethics issues. There are several examples of AI assessing people of color and gender differently. However, as UNESCO noted just last year, despite the decade of debate, biases of AI remain fundamental right down to the core. “Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. ...”


Vigilance advised if using AI to make cyber decisions

Artificial intelligence (AI) and machine learning (ML) driven tools and technologies are on the rise to help organizations address these challenges by significantly improving their security posture efficiently and effectively. Tools using ML and AI are improving accuracy and speed of response. ... The vendor may have utilised AI in various product development stages. For instance, AI could have been employed to shape the requirements and design of the product, review its design or even generate source code. Additionally, AI might have been used to select relevant open-source code, develop test plans, write the user guide or create marketing content. In some cases, AI could be a functional product component. However, it’s important to note that sometimes an AI capability might really be machine learning (ML). Determining the legitimacy of AI claims can be challenging: the vendor’s transparency and supporting evidence are crucial. Weighing the vendor’s reputation, expertise and track record in AI development is vital for distinguishing authentic AI-powered products from “snake oil.”


3 GitOps Myths Busted

It is highly likely that as your organization embarks on its cloud native journey, there will come a point where scaling to multiclusters becomes necessary. For instance, developers may need to work on and test applications before making pull requests without having direct access to the production code, of course, for applications running in production on Kubernetes. Moreover, in certain scenarios, a team might manage multiple clusters and distribute workloads among them to ensure sufficient fault tolerance and availability. For example, when running a machine learning training workload, the team might increase the number of replicas or cluster replicas to meet specific demands. Additionally, different clusters may be deployed across various physical locations in cloud environments, whether on Amazon Web Services, Azure, GCP and others, requiring separate tools and processes to align with geographic mandates, legal restrictions, compliance requirements, and data access policies.


Simplifying IT strategy: How to avoid the annual planning panic

In developing your strategy, you have two responsibilities related to the finances of any proposed project: First, you must articulate the costs and benefits of the project; and second, you must contextualize those costs and benefits by comparing them to overall budget projections, which should include multi-year projections that align with the needs and norms of your finance organization. Not sure how to frame the numbers? Borrow revenue projections from FP&A, then layer in projected IT run-rate spend, IT project spend for each year in the forecast, and summarize total IT spend as a percentage of revenue. Hint: Be ready to explain any increase in this metric. ... What will you need from others for your plan to succeed? Dedicated resources from BUs and functions? Participation in steering committees? Incremental funding? The point is you can’t drive a transformation alone. Key to success will be clarifying roles and responsibilities and ensuring others have skin in the game. ... Once you’ve tried answering the questions, consult your deputies. Test and refine your hypothesis as a group. 



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - August 01, 2023

Is generative AI mightier than the law?

The FTC hasn’t been shy in going after Big Tech. And in the middle of July, it took its most important step yet: It opened an investigation into whether Microsoft-backed OpenAI has violated consumer protection laws and harmed consumers by illegally collecting data, violating consumer privacy and publishing false information about people. In a 20-page letter sent by the FTC to OpenAI, the agency said it’s probing whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.” The letter made clear how seriously the FTC takes the investigation. It wants vast amounts of information, including technical details about how ChatGPT gathers data, how the data is used and stored, the use of APIs and plugins, and information about how OpenAI trains, builds, and monitors the Large Language Models (LLMs) that fuel its chatbot. None of this should be a surprise to Microsoft or ChatGPT. In May, FTC Chair Lina Khan wrote an opinion piece in The New York Times laying out how she believed AI must be regulated.


Four Pillars of Digital Transformation

The first of the four principles was understanding your customers and your customer segments. That's number one. Second is aligning with your customers and your functional teams, because you cannot do any digital transformation in a silo. You can't say, "Oh, Asif and Shane want to digitally transform this company, forget about what people A and people B are thinking. Shane and I are going to go and make that happen." We will fail. Not going to happen. That's where the cross-functional team alignment comes in. The third is influencing and understanding what the change is, why we want to do it, how we are going to do it. And what's in it, not for Shane, not for Asif, but for you as a customer or as an organization? Again, showing the empathy and explaining the why behind it. And finally, communicating, communicating, communicating, over-communicating and celebrating success. To me, those are the big four pillars that we use to sell the idea of what digital transformation is at any level, from the C-level all the way to people at the store level. Can I explain to them what's in it for them?


Scientists Seek Government Database to Track Harm from Rising 'AI Incidents'

Faced with mounting evidence of such harmful AI incidents, the FAS noted the government database could somewhat align with other trackers and efforts. "The database should be designed to encourage voluntary reporting from AI developers, operators, and users while ensuring the confidentiality of sensitive information," the FAS said. "Furthermore, the database should include a mechanism for sharing anonymized or aggregated data with AI developers, researchers, and policymakers to help them better understand and mitigate AI-related risks. The DHS could build on the efforts of other privately collected databases of AI incidents, including the AI Incident Database created by the Partnership on AI and the Center for Security and Emerging Technologies. This database could also take inspiration from other incident databases maintained by federal agencies, including the National Transportation Safety Board's database on aviation accidents." The group further recommended that the DHS should collaborate with the NIST to design and maintain the database, including setting up protocols for data validation, categorization, anonymization, and dissemination.


Generative AI: Headwind or tailwind?

It's topping everyone's wish list, with influences converging from various directions: customers, staff members and corporate boards, all applying pressure to harness its potential in their respective markets. On the bright side, there's a unified objective: to make progress. The challenge, however, is that, like most early-stage technologies, the path forward with generative AI isn't straightforward — there's a lot of ambiguity about what to do, how to do it or even where to start. The potential of generative AI surpasses mere cost-effectiveness and efficiency. It can fuel the generation of new ideas, fine-tune designs and facilitate the launch of new products. It could serve as your catalyst for innovation if you're bold enough to step into this new frontier. But where do you step first? Our approach is first to identify a problem or "missing." In the simplest explanation possible, envision a Venn diagram where one circle represents the new tech wave (generative AI) and the other represents your customer, their challenges, opportunities, tasks, pains and gains.


Hackers: We won’t let artificial intelligence get the better of us

Hackers who have adopted or who plan to adopt generative AI are most inclined to use OpenAI's ChatGPT ... Those that have taken the plunge are using generative AI technology in a wide variety of ways, with the most commonly used functions being text summarisation or generation, code generation, search enhancement, chatbots, image generation, data design, collection or summarisation, and machine learning. Within security research workflows specifically, hackers said they found generative AI most useful to automate tasks, analyse data, and identify and validate vulnerabilities. Less widely used applications included conducting reconnaissance, categorising threats, detecting anomalies, prioritising risk and building training models. Many hackers who are not native English speakers or not fluent in English are also using services such as ChatGPT to translate or write reports and bug submissions, and fuel more collaboration across national borders.


Your CDO Does More Than Just Protect Data

Influential CDOs who can collaborate without being perceived as aloof or arrogant stand out in the field. Balancing visionary thinking with practical implementation strategies is vital, and CDOs who instill purpose and forward-looking excitement within their teams create a culture of innovation and continuous improvement. These qualities are essential for unlocking the full potential of data leadership. ... With boards needing more depth of tech knowledge to oversee strategy, CDOs can be valuable directors. CDOs who can demonstrate experience in making data core to the company’s strategy or informing a transformational pivot for the business would bring a high amount of value to boardroom discussions. The opportunity to understand and see risks and opportunities through a board member’s eyes is an invaluable experience for a CDO, which not only helps the CDO to prepare for future board service but also gives your board members additional education about the future of data and what it can bring to your organization.


Data Warehouse Telemetry: Measuring the Health of Your Systems

At the heart of the data warehouse system, there is a pulse. This is the set of measures that indicates the system's performance -- its heartbeat, so to speak. It includes measurements of system resources, such as disk reads and writes, CPU and memory utilization, and disk usage. These metrics indicate how well the overall system is performing, and it is important to monitor them to make sure they do not go too low or too high. When they go too low, it is an indicator that the system has been oversized and resources are being wasted. When they go too high, it is an indicator that the system is undersized and resources are nearing exhaustion. As the resources hit a critical level, overall performance can grind to a halt, freezing processes and negatively impacting the user experience. When a medical practitioner sees that a patient’s heart rate/pulse is too fast or too slow, they will provide several recommendations, including ongoing monitoring to see if the situation improves, or changes to diet or exercise.
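A minimal sketch of that pulse-check, assuming the third-party psutil package for resource metrics; the low and high thresholds are illustrative and would be tuned per system.

```python
# pip install psutil
import psutil

LOW, HIGH = 20.0, 85.0  # illustrative utilization bounds, in percent

readings = {
    "cpu":    psutil.cpu_percent(interval=1),
    "memory": psutil.virtual_memory().percent,
    "disk":   psutil.disk_usage("/").percent,
}

for metric, value in readings.items():
    if value < LOW:
        print(f"{metric} at {value:.0f}%: possibly oversized, resources wasted")
    elif value > HIGH:
        print(f"{metric} at {value:.0f}%: possibly undersized, nearing exhaustion")
    else:
        print(f"{metric} at {value:.0f}%: healthy")
```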


Reducing Generative AI Hallucinations and Trusting Your Data

Data has two dimensions. One is the actual value of the data and the parameter that it represents; for example, the temperature of an asset in a factory. Then, there is also the relational aspect of the data that shows how the source of that temperature sensor is connected to the rest of the other data generators. This value-oriented aspect of data and the relational aspect of that data are both important for quality, trustworthiness, and the history and revision and versioning of the data. There’s obviously the communication pipeline, and you need to make sure that where the data sources connect to your data platform has enough sense of reliability and security. Make sure the data travels with integrity and the data is protected against malicious intent. ... Generative AI is one of those foundational technologies like how software changed the world. Marc [Andreessen, a partner in the Silicon Valley venture capital firm Andreessen Horowitz] in 2011 said that software is eating the world, and software already ate the world. It took 40 years for software to do this.


10 Reasons for Optimism in Cybersecurity

The new National Cybersecurity Strategy announced by the Biden Administration this year emphasizes the importance of public-private collaboration. Google Cloud’s Venables anticipates that knowledge sharing between the public and private sectors will help enhance transparency around cyber threats and improve protection. “As public and private sector collaboration grows, in the next few years we’ll see deeper coordination between agencies and big tech organizations in how they implement cyber protections,” he says. The public and private sectors also have the opportunity to join forces on cybersecurity regulation. ... As the cybersecurity product market matures, it will do more than embrace secure-by-design and -default principles. XYPRO’s Tcherchian is also optimistic about the consolidation of cybersecurity solutions. “Cybersecurity consolidation integrates multiple cybersecurity tools and solutions into a unified platform, addressing the crowded and complex nature of the cybersecurity market,” he explains.


Keeping the cloud secure with a mindset shift

Organizations developing software through cloud-based tools and environments must take additional care to adapt their processes. Adopting a “shift-left” approach for the continuous integration and continuous deployment (CI/CD) pipeline is particularly important. Traditionally, security checks were often performed towards the end of the development cycle. However, this reactive approach can allow vulnerabilities to slip through the cracks and reach production stages. The shift-left approach advocates for integrating security measures earlier in the development cycle. By doing so, potential security risks can be identified and mitigated early, preventing malware infiltration and reducing the cost and complexity of addressing security issues at later stages. This proactive approach aligns with the dynamic nature of cloud environments, ensuring robust security without hindering agility and innovation. Businesses should consider how they can mirror the shift-left ethos across their other cloud operations.



Quote for the day:

"Leadership offers an opportunity to make a difference in someone's life, no matter what the project." -- Bill Owens

Daily Tech Digest - July 31, 2023

The open source licensing war is over

Too many open source warriors think that the license is the end, rather than just a means to grant largely unfettered access to the code. They continue to fret about licensing when developers mostly care about use, just as they always have. Keep in mind that more than anything else, open source expands access to quality software without involving the purchasing or (usually) legal teams. This is very similar to what cloud did for hardware. The point was never the license. It was always about access. Back when I worked at AWS, we surveyed developers to ask what they most valued in open source leadership. You might think that contributing code to well-known open source projects would rank first, but it didn’t. Not even second or third. Instead, the No. 1 criterion developers used to judge a cloud provider’s open source leadership was that it “makes it easy to deploy my preferred open source software in the cloud.” ... One of the things we did well at AWS was to work with product teams to help them discover their self-interest in contributing to the projects upon which they were building cloud services, such as ElastiCache.


Navigate Serverless Databases: A Guide to the Right Solution

One of the core features of serverless is pay-as-you-go pricing. Almost all serverless databases attempt to address a common challenge: how to provision resources economically and efficiently under uncertain workloads. Prioritizing lower costs may mean consuming fewer resources; however, in the event of unexpected spikes in business demand, you may then have to compromise user experience and system stability. On the other hand, more generous and cautious resource provisioning leads to resource waste and higher costs. Striking a balance between these two approaches requires complex and meticulous engineering management, which would divert your focus from the core business. Furthermore, the pay-as-you-go billing model has varying implementations across serverless products. Most offer granular billing based on storage capacity and read/write operations per unit, which is largely possible thanks to distributed architectures that allow finer resource scaling.
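As a rough illustration of that granular billing, the sketch below computes a monthly bill from stored gigabytes and read/write units; the rates are hypothetical, since each serverless product publishes its own units and prices.

```python
# A minimal sketch of granular pay-as-you-go billing: storage plus
# per-unit read/write operations. All rates are hypothetical.
def monthly_cost(storage_gb: float, read_units: int, write_units: int,
                 gb_rate: float = 0.25,          # $ per GB-month (assumed)
                 read_rate: float = 0.25 / 1e6,  # $ per read unit (assumed)
                 write_rate: float = 1.25 / 1e6  # $ per write unit (assumed)
                 ) -> float:
    return (storage_gb * gb_rate
            + read_units * read_rate
            + write_units * write_rate)

# e.g. 50 GB stored, 120M reads, and 8M writes in a month -> $52.50
print(f"${monthly_cost(50, 120_000_000, 8_000_000):.2f}")
```

The appeal for uncertain workloads is that the bill tracks actual consumption, so neither over-provisioning nor spike-driven under-provisioning has to be engineered around by hand.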


Building a Beautiful Data Lakehouse

It’s common to compensate for the respective shortcomings of existing repositories by running multiple systems, for example, a data lake, several data warehouses, and other purpose-built systems. However, this approach frequently creates headaches. Most notably, data stored in one repository type is often excluded from analytics run on another, which is suboptimal in terms of results. In addition, having multiple systems requires the creation of expensive and operationally burdensome processes to move data from lake to warehouse when required. To overcome the data lake’s quality issues, for example, many teams use extract/transform/load (ETL) processes to copy a small subset of data from lake to warehouse for important decision support and BI applications. This dual-system architecture requires continuous engineering to ETL data between the two platforms, and each ETL step risks introducing failures or bugs that reduce data quality. Moreover, leading ML systems, such as TensorFlow, PyTorch, and XGBoost, don’t work well on data warehouses.
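The lake-to-warehouse copy described above often reduces to a script like the following sketch, which assumes a hypothetical Parquet path, warehouse DSN, and the pandas/SQLAlchemy stack; every hand-written step of this kind is a place where the failures and bugs mentioned above can creep in.

```python
# A minimal sketch of the dual-system ETL the article warns about: copy a
# small, curated subset from the lake (Parquet files) into a warehouse
# table for BI. Paths, table names, and the pandas/SQLAlchemy stack are
# illustrative assumptions (reading from S3 also requires s3fs).
import pandas as pd
from sqlalchemy import create_engine

# Extract: read raw events from the data lake (hypothetical path).
df = pd.read_parquet("s3://example-lake/events/2023/07/")

# Transform: keep only the subset needed for decision support.
subset = df.loc[df["event_type"] == "purchase",
                ["order_id", "customer_id", "amount", "event_time"]]

# Load: overwrite the warehouse reporting table (hypothetical DSN).
engine = create_engine("postgresql://warehouse.example.com/analytics")
subset.to_sql("purchases_daily", engine, if_exists="replace", index=False)
```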


How the best CISOs leverage people and technology to become superstars

Exemplary CISOs are also able to address other key pain points that traditionally flummox good cybersecurity programs, such as the relationships between developers and application security (AppSec) teams, or how cybersecurity is viewed by other C-suite executives and the board of directors. For AppSec relations, good CISOs realize that developer enablement helps to shift security farther to the so-called left and closer to a piece of software’s origins. Fixing flaws before applications are dropped into production environments is important, and much better than the old way of building code first and running it past the AppSec team at the last minute, since it avoids those annoying hotfixes and delays to delivery. But it can’t solve all of AppSec’s problems alone. Some vulnerabilities may not show up until applications get into production, so relying on shifting left in isolation to catch all vulnerabilities is impractical and costly. There also needs to be continuous testing and monitoring in the production environment, and yes, sometimes apps will need to be sent back to developers even after they have been deployed.


TSA Updates Pipeline Cybersecurity Directive to Include Regular Testing

The revised directive, developed with input from industry stakeholders and federal partners including the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Transportation, will “continue the effort to reinforce cybersecurity preparedness and resilience for the nation’s critical pipelines”, the TSA said. The reissued security directive for critical pipeline companies follows the initial directive announced in July 2021 and renewed in July 2022, and the TSA said that the requirements issued in previous years remain in place. According to the 2022 security directive update, pipeline owners and operators are required to establish and execute a TSA-approved cybersecurity implementation plan with specific cybersecurity measures, and to develop and maintain a cybersecurity incident response plan (CIRP) that includes measures to be taken during cybersecurity incidents.


What is the cost of a data breach?

"One particular cost that continues to have a major impact on victim organizations is theft/loss of intellectual property," Glenn J. Nick, associate director at Guidehouse, tells CSO. "The media tend to focus on customer data during a breach, but losing intellectual property can devastate a company's growth," he says. "Stolen patents, engineering designs, trade secrets, copyrights, investment plans, and other proprietary and confidential information can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company." It's important to note that how a company responds to and communicates a breach can have a large bearing on the reputational impact, along with the financial fallout that follows, Mellen says. "Understanding how to maintain trust with your consumers and customers is really, really critical here," she adds. "There are ways to do this, especially around building transparency and using empathy, which can make a huge difference in how your customers perceive you after a breach. If you try to sweep it under the rug or hide it, then that will truly affect their trust in you far more than the breach alone."


Meeting Demands for Improved Software Reliability

“Developers need to fix bugs, address performance regressions, build features, and get deep insights about particular service or feature level interactions in production,” he says. That means they need access to necessary data in views, graphs, and reports that make a difference to their workflows. “However, this data must be integrated and aligned with IT operators to ensure teams are working across the same data sets,” he says. Sigelman says IT operations is a crucial part of an organization’s overall reliability and quality posture. “By working with developers to connect cloud-native systems such as Kubernetes with traditional IT applications and systems of record, the entire organization can benefit from a centralized data and workflow management pane,” he says. From this point, event and change management can be combined with observability instruments, such as service level objectives, to provide not only a single view across the entire IT estate, but to demonstrate the value of reliability to the entire organization.
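For a concrete sense of what a service level objective contributes to that shared view, the sketch below computes the remaining error budget for an availability SLO; the 99.9% target and request counts are illustrative assumptions.

```python
# A minimal sketch of the service level objective (SLO) arithmetic that
# observability tooling surfaces. The target and counts are illustrative.
def error_budget_remaining(slo_target: float, total: int, failed: int) -> float:
    """Fraction of the error budget left for the current window."""
    allowed_failures = (1.0 - slo_target) * total   # the error budget
    if allowed_failures == 0:
        return 0.0
    return 1.0 - (failed / allowed_failures)

# e.g. a 99.9% availability SLO over 2,000,000 requests with 600 failures:
# the budget is 2,000 failures, so 70% of the budget remains.
print(f"{error_budget_remaining(0.999, 2_000_000, 600):.0%}")
```

A burned-down budget is a signal both developers and IT operators can act on, which is the kind of shared data set the quoted passage calls for.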


How will artificial intelligence impact UK consumers’ lives?

In the next five years, I expect we may see a rise in new credit options and alternatives, such as “Predictive Credit Cards,” where AI anticipates a consumer’s spending needs based on their past behaviour and adjusts the credit limit or offers tailored rewards accordingly. Additionally, fintechs are likely to integrate Large Language Models (LLMs) and add AI to digital and machine-learning-powered services. ... Through AI, consumers may also be able to access a better overview of their finances, specifically personalised financial rewards, as they would have access to tools to review all transactions, receive recommendations on personalised spend-based rewards, and even benchmark themselves against other cardholders in similar demographics or against industry standards. Consumers may also be able to ask questions and get answers at the click of a button, for example, ‘How much debt do I have compared to my available credit limits?’ or ‘What’s the best way to use my rewards points based on my recent purchases?’, improving financial literacy and potentially providing them with more spending/saving power and personalised experiences in the long run.


IT Strategy as an Enterprise Enabler

IT strategy is a plan to create an information technology capability that maximizes business value for the organization. IT capability is the organization’s ability to meet business needs and improve business processes using IT-based systems. The objective of an IT strategy is to spend the least amount of resources while generating better ROI, and it helps set the direction for the IT function in an organization. A successful IT strategy helps organizations reduce operational bottlenecks, optimize total cost of ownership (TCO), and derive value from technology. ... IT strategy definition and implementation covers the key aspects of technology management, planning, governance, service management, risk management, cost management, human resource management, hardware and software management, and vendor management. Broadly, an IT strategy has five phases: Discovery, Assess, Current IT, Target IT, and Roadmap. The idea is to keep the usual annual and multi-year plan but insert regular check-ins along the way, revisiting the strategy every quarter or every six months to ensure that optimal business value is being created.


AI system audits might comply with local anti-bias laws, but not federal ones

"You shouldn’t be lulled into false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in Illinois in 2020 when they came out with the facial recognition act in employment, which basically said if you’re going to use facial recognition technology during an interview to assess if they’re smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose. "You can see how fragmented the laws are, where Illinois is saying we’re going to worry about this one aspect of an application for facial recognition in an interview setting. ... "You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. Whether the AI technology is making all the employment decisions or one of many factors in an employment decision; whether it’s simply assisting you with information about a candidate or employer that otherwise you wouldn’t have been able to ascertain without advanced machine learning looking for patterns that a human couldn’t have fast enough.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford