Daily Tech Digest - June 10, 2024

AI vs humans: Why soft skills are your secret weapon

AI can certainly assist with some aspects of the creative process, but true creativity is something only humans can achieve, for several reasons. Firstly, it often involves intuition, emotion and empathy, as well as thinking outside the box and making connections between seemingly unrelated concepts. Creativity is often shaped by personal experiences and cultural background, making every individual’s creative work unique. ... Leadership and strategic management will continue to be driven by humans. When making decisions, people are able to consider various factors such as personal relationships or company culture. General awareness, intuition, understanding of broader contexts that lie beyond data and effective communication skills are all human traits. ... Humans possess a crucial trait that AI is unable to replicate (although it’s definitely coming closer): Empathy. AI can’t communicate with your team members at the same level, provide solutions to their problems or offer a listening ear when necessary. Managing a team means talking to people, listening and understanding their needs and motivations. The human touch is essential to make sure that everyone is on the same page. 


How to Avoid Pitfalls and Mistakes When Coding for Quality

When a code base grows to the point that redundancies emerge, "code bloat" occurs. An abundance of unnecessary code can adversely affect the site's performance, and the code can become too complex to maintain. There are strategies for addressing redundancy; as code is implemented, it is crucial for it to be modularized, or broken down into smaller modular components, with proper encapsulation and extraction. Code that is modularized promotes reuse, simplifies maintenance, and keeps the size of the code base in check. ... There is a tendency to "reinvent the wheel" when writing code. A more practical solution is to reuse libraries whenever possible because they can be utilized within different parts of the code. Sometimes, code bloat results from a historically bloated code base without an easy option for modularization, extraction, or library reuse. In this case, the most effective strategy is to turn to code refactoring. Regularly take initiatives to refactor code, eliminate any unnecessary or duplicate logic, and improve the overall code structure of the repository over time.
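
The modularization advice above can be sketched with a minimal, hypothetical Python example: two handlers that once duplicated the same validation logic now share one extracted helper. All names here are illustrative, not taken from the article.

```python
def _validate_payload(payload: dict, required: tuple) -> None:
    """Shared validation extracted from what were two duplicated handlers."""
    missing = [field for field in required if field not in payload]
    if missing:
        raise ValueError(f"missing fields: {', '.join(missing)}")


def create_user(payload: dict) -> str:
    # Reuses the extracted helper instead of repeating the checks inline.
    _validate_payload(payload, ("name", "email"))
    return f"created {payload['name']}"


def create_order(payload: dict) -> str:
    _validate_payload(payload, ("user_id", "item"))
    return f"order for {payload['item']}"
```

Because the validation lives in one place, a fix or a new rule is applied once rather than hunted down across every handler, which is exactly how extraction keeps a code base from bloating.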


The BEC battleground: Why zero trust and employee education are your best line of defence

Even with extensive employee training, some BEC scams can bypass human vigilance. Comprehensive security processes are essential to minimize their impact. The zero-trust security model is crucial here. It assumes no inherent trust for anyone, inside or outside the network. With zero trust, every user and device must be continuously authenticated before accessing any resources. This makes it much harder for attackers. Even if they steal a login credential, they can’t automatically access the entire system. A key component of zero trust is multi-factor authentication (MFA), which acts as multiple locks on every access point. Just like a physical security system requiring multiple forms of identification, MFA requires not just a username and password, but an additional verification factor like a code from a phone app or fingerprint scan. This makes unauthorised entry, including through BEC scams, much harder. So, any IT infrastructure implemented must have zero trust and MFA at its core. A complement to zero trust is the principle of least privilege access: granting users only the minimum level of access required to perform their jobs.
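
As an illustration of the "code from a phone app" factor described above, here is a minimal sketch of time-based one-time passwords (TOTP, per RFC 6238) using only the Python standard library. This is a teaching sketch, not the article's recommendation of any specific product; a production system would use a vetted library, secure secret storage, and a tolerance window for clock drift.

```python
import base64
import hmac
import struct
import time


def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a Base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret_b32: str, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    return hotp(secret_b32, int(time.time()) // step)


def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)
```

This is the second "lock" on the access point: even a stolen password fails verification without the current code derived from the shared secret.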


Why CISOs need to build cyber fault tolerance into their business

For a rapidly evolving technology like GenAI, it is impossible to prevent all attacks at all times. The ability to adapt to, respond to, and recover from inevitable issues is critical for organizations to explore GenAI successfully. Therefore, effective CISOs are complementing their prevention-oriented guidance for GenAI with effective response and recovery playbooks. Regarding third-party cybersecurity risk management: despite the cybersecurity function’s best efforts, organizations will continue to work with risky third parties. Cybersecurity’s real impact lies not in asking more due diligence questions, but in ensuring the business has documented and tested third-party-specific business continuity plans in place. “CISOs should be guiding the sponsors of third-party partners to create a formal third-party contingency plan, including things like an exit strategy, alternative suppliers list, and incident response playbooks,” said Mixter. “CISOs tabletop everything else. It’s time to bring tabletop exercises to third-party cyber risk management.”


AI system poisoning is a growing threat — is your security regime ready?

CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could be impacted by those attacks if they are using the vendor-supplied corrupted AI systems. ... Security experts and CISOs themselves say many organizations are not prepared to detect and respond to poisoning attacks. “We’re a long way off from having truly robust security around AI because it’s evolving so quickly,” Stevenson says. He points to the Protiviti client that suffered a suspected poisoning attack, noting that workers at that company identified the possible attack because its “data was not synching up, and when they dived into it, they identified the issue. [The company did not find it because] a security tool had its bells and whistles going off.” He adds: “I don’t think many companies are set up to detect and respond to these kinds of attacks.” ... “The average CISO isn’t skilled in AI development and doesn’t have AI skills as a core competency,” says Jon France, CISO with ISC2. Even if they were AI experts, they would likely face challenges in determining whether a hacker had launched a successful poisoning attack.


Accelerate Transformation Through Agile Growth

The problem is that when you start the next calendar year in January, you get a false sense of confidence because December is still 12 months away — all the time in the world, or so it seems, to execute your annual strategic plan. But then by April, after the first quarter has ended, chances are you’ll have started to feel a bit behind. You won’t be overly worried, however; you know you still have plenty of time to catch up. But then you’ll get to September and hit the 100-day sprint, which typically comes right after Labor Day in the United States. Now, panic will set in as you race to the end of the year desperately trying to hit those annual goals that were established all the way back in January. In growth cycles longer than 90 days, we tend to get off track. But it doesn’t have to be this way. You can use the 90-Day Growth Method to bring your team together every quarter to review and celebrate your progress over the past 90 days, refocus on goals and actions, and renew your commitment to achieving them. Soon, you and your team will feel re-energized and ready to move forward with courage and confidence for the next 90 days.


We need a Red Hat for AI

To be successful, we need to move beyond the confusing hype and help enterprises make sense of AI. In other words, we need more trust (open models) and fewer moving parts ... OpenAI, however popular it may be today, is not the solution. It just keeps compounding the problem with proliferating models. OpenAI throws more and more of your data into its LLMs, making them better but not any easier for enterprises to use in production. Nor is it alone. Google, Anthropic, Mistral, etc., etc., all have LLMs they want you to use, and each seems to be bigger/better/faster than the last, but no clearer for the average enterprise. ... You’d expect the cloud vendors to fill this role, but they’ve kept to their preexisting playbooks for the most part. AWS, for example, has built a $100 billion run-rate business by saving customers from the “undifferentiated heavy lifting” of managing databases, operating systems, etc. Head to the AWS generative AI page and you’ll see they’re lining up to offer similar services for customers with AI. But LLMs aren’t operating systems or databases or some other known element in enterprise computing. They’re still pixie dust and magic.


How Data Integration Is Evolving Beyond ETL

From an overall trend perspective, with the explosive growth of global data, the emergence of large models, and the proliferation of data engines for various scenarios, the rise of real-time data has brought data integration back to the forefront of the data field. If data is considered a new energy source, then data integration is like the pipeline of this new energy. The more data engines there are, the higher the efficiency, data source compatibility, and usability requirements of the pipeline will be. Although data integration will eventually face challenges from Zero ETL, data virtualization, and Data Fabric, for the foreseeable future the performance, accuracy, and ROI of these technologies have yet to reach the popularity of data integration. Otherwise, the most popular data engines in the United States would not be Snowflake or Delta Lake but Trino. Of course, I believe that in the next 10 years, under a combination of Data Fabric and large models, virtualization + EtLT + data routing may be the ultimate solution for data integration. In short, as long as data volume grows, the pipelines between data will always exist.


Protecting your digital transformation from value erosion

The first form of value erosion pertains to cost increases within your project without an equivalent increase in the value or activities being delivered. With project delays, for example, there are usually additional costs incurred related to resource carryover because of the timeline increase. In this instance, the absence of additional work being delivered, or future work being pulled forward to offset the additional costs, is a prime illustration of value erosion. ... Decrease in value without decreased costs: A second form occurs when there’s a decrease in value without a cost adjustment. This can happen due to changing business priorities or project delays, especially within the build phase. As an alternative to extending the project timeline, organizations may decide to prioritize and reduce features to meet deadlines. ... Failure to identify and plan for potential risks leaves projects vulnerable to unforeseen complications and budgetary concerns. Large variances in initial SI responses can be attributed to different assumptions on scope and service levels provided.


Ask a Data Ethicist: What Is Data Sovereignty?

Put simply, data sovereignty relates to who has the power to govern data. It determines who is legally empowered to make decisions about the collection and use of data. We can think about this in the context of two governments negotiating between each other, each having sovereign powers of self-determination. Indigenous governments are claiming their sovereign rights to their people’s data. On the one hand, this is a response to the atrocities that have taken place with respect to data gathered and taken beyond the control of Indigenous communities by researchers, governments, and other non-Indigenous parties. Yet, as data becomes increasingly important, many countries are seeking to set regulatory standards for data. It makes sense that Indigenous governments would assert similar rights with respect to their people’s data. ... Data sovereignty is an important part of Canada’s Truth and Reconciliation calls to action. The FNIGC governs the relevant processes for those seeking to work with First Nations in Canada to appropriately access data.



Quote for the day:

"The secret to success is good leadership, and good leadership is all about making the lives of your team members or workers better." -- Tony Dungy

Daily Tech Digest - June 09, 2024

AI Systems Are Learning to Lie and Deceive

Put another way, as Park explained in a press release: "We found that Meta’s AI had learned to be a master of deception." "While Meta succeeded in training its AI to win in the game of Diplomacy," the MIT physicist said in the school's statement, "Meta failed to train its AI to win honestly." In a statement to the New York Post after the research was first published, Meta made a salient point when echoing Park's assertion about Cicero's manipulative prowess: that "the models our researchers built are trained solely to play the game Diplomacy." Well-known for expressly allowing lying, Diplomacy has jokingly been referred to as a friendship-ending game because it encourages pulling one over on opponents, and if Cicero was trained exclusively on its rulebook, then it was essentially trained to lie. Reading between the lines, neither study has demonstrated that AI models are lying of their own volition, but instead doing so because they've either been trained or jailbroken to do so. That's good news for those concerned about AI developing sentience — but very bad news if you're worried about someone building an LLM with mass manipulation as a goal.


AI search answers are the fast food of your information diet – convenient and tasty

These AI features vacuum up information from the internet and other available sources and spit out an answer based on how they are trained to associate words. A core argument against them is that they mostly remove from the equation the user’s judgment, agency and opportunity to learn. This may be OK for many searches. Want a description of how inflation has affected grocery prices in the past five years, or a summary of what the European Union AI Act includes? AI Overviews can be a good way to cut through a lot of documents and extract those specific answers. But people’s searching needs don’t end with factual information. They look for ideas, opinions and advice. Looking for suggestions about how to keep the cheese from sliding off your pizza? Google will tell you that you should add some glue to the sauce. Or wondering if running with scissors has any health benefits? Sure, Google will say, “it can also improve your pores and give you strength”. While a reasonable user can understand that such outrageous answers are likely to be wrong, it’s hard to detect that for factual questions.


Future of biometric payments and digital ID taking shape

Japan is adding support for digital wallets to its My Number national digital ID system, starting with Apple Wallet next spring. The passage of a new law updating the My Number system enables this first step for Apple Wallet IDs outside of the U.S. Lawmakers envisage the use of My Numbers on smartphones for a wide range of public and private sector interactions. ... eEstonia Digital Transformation Adviser Erika Piirmets tells Biometric Update in an interview that the country’s mature digital government makes high uptake of its EU Digital Identity Wallet quite likely. Piirmets explains the evolving ecosystem of ID credentials in Estonia, the country’s work to enable cross-border interoperability, and ongoing work to bring electronic voting to mobile devices. ... Mobile driver’s licenses represent a major opportunity to move towards decentralized digital identity, but a panel at Identiverse hosted by OpenID Foundation notes that complex standards need to be orchestrated. For mDLs and digital wallets to be adopted, they need to be interoperable. Wallets need to be trusted by issuers, and relying parties need to be trusted by wallets.


How to Build The Entrepreneurial Spirit

Open time in employees' schedules for creative thinking. Cutting one hour of a redundant meeting for a 10-person team could yield 10 hours of individual employee exploration time. Don't shoot down ideas if employees can't make a strong business case yet. New and creative products lack the market data to back up the innovation. Give people the space and time to experiment and collect evidence. Blocking out time for "innovation days" is also becoming a common strategy. ... Employees also hesitate to take risks when they fear failure will earn them negative feedback or believe that playing it safe is more likely to lead to a promotion. At truly entrepreneurial companies, employees feel confident that risk-taking, within certain bounds, will be accepted and even rewarded. The Indian conglomerate Tata Group, for example, gives a "Dare to Try" award for brave attempts at unsuccessful innovations. ... Encourage employees to reflect on what innovations they might pursue and where their specific talents may be most helpful. Perhaps there's an opportunity to apply existing expertise to a longstanding problem or to bring prior insights to a new domain with an unexpected connection.


Why SAST + DAST can't be enough

These hybrid techniques highlight the fact that the dichotomous approach to application security offered by SAST/DAST is quickly becoming deprecated. Having two big security staples stretched out over the SDLC is not enough to adapt to the new categories of threats around software code. In fact, when it comes to hardcoded credentials, a whole new aspect has to be taken into account. ... Where SAST fails to convey the idea of probable “dormant” threats inside the git history, new concepts and methods need to emerge. This is why at GitGuardian we believe secrets detection deserves its very own category, and work towards raising the general awareness around its benefits. Older concepts are too narrow to encompass these new and actively exploited kinds of threats. But that’s not the end of the story: code reviews are a flawed mechanism, too. ... Both are still necessary, but no longer sufficient to shield application security from vulnerabilities. Unfortunately, the security landscape is moving at a great pace, and the proliferation of intricate concepts sometimes makes it difficult to grasp the definitive scope of action and limitations of some tools.
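
To make the secrets-detection idea concrete, here is a minimal, purely illustrative sketch of pattern-based scanning over a blob of text, such as file content pulled out of git history with `git rev-list` and `git cat-file`. These regexes are invented for illustration and are not taken from GitGuardian or any other tool; real detectors combine far richer patterns with entropy scoring and validity probes.

```python
import re

# Illustrative detectors only. A real scanner has hundreds of
# provider-specific patterns plus entropy and liveness checks.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}


def scan_blob(text: str) -> list:
    """Return (detector_name, matched_text) pairs found in one blob."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

The key point from the article is that such a scan must run over every historical blob, not just the current tree: a credential committed and later deleted is still a live, "dormant" threat in the history, which is exactly what SAST on the working copy misses.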


Should you use BitLocker on Windows?

It's generally good to have BitLocker enabled, especially for fixed drives on your PC. However, if you have drives that move between different PCs, BitLocker may be a problem because it's only available on Windows. That means that not only do you have to use a Windows PC to encrypt your drive, but only Windows PCs can decrypt it, so you won't be able to read your data on a separate computer. BitLocker may also cause some hassle if you're trying to access recovery options on your PC or the computer no longer boots for some reason. In order to access a BitLocker-encrypted drive without your usual password, you'll need to have the recovery key, which you may not always have handy. That's not exactly a problem, but it's something to be aware of. ... So BitLocker is generally good and you should have it enabled, but there's no need to scramble if you haven't done it before. Windows 11 will encrypt the fixed drives on your PC by default, so you're already set on that front unless you want to disable it for one of the reasons mentioned above.


Proposed EU Chat Control law wants permission to scan your WhatsApp messages

The key here is the 'user consent' clause. That's the way to make the scanning of privately shared multimedia files not an obligation but a choice. How they plan to do so, however, resembles blackmail. As we mentioned, if you want to share a photo, video, or URL with your friend on WhatsApp you must give consent, or just stick to texting, calls, and vocal messages. Commenting on this point, Digneaux said: "There is no consent. There is no choice. If innocent users don’t agree to let the authorities snoop on their messages, emails, photos, and videos they will simply be cut off from the modern world." Proton isn't alone in feeling this way. A group of over 60 organizations, including Proton, Mozilla, Signal, Surfshark, and Tuta, alongside 50+ individuals, signed a joint statement to voice their concerns against the new proposal. "Coerced consent is not freely given consent," wrote the group. "If the user has no real choice, feels compelled to consent, or would de facto be barred from the service if they do not consent, then the consent given will not be freely given."


AI Gateways vs. API Gateways: What’s the Difference?

Most organizations today consume AI outputs via a third-party API, either from OpenAI, Hugging Face or one of the cloud hyperscalers. Enterprises that actually build, tune and host their own models also consume them via internal APIs. The AI gateway’s fundamental job is to make it easy for application developers, AI data engineers and operational teams to quickly call up and connect AI APIs to their applications. This works in a similar way to API gateways. That said, there are critical differences between API and AI gateways. For example, the computing requirements of AI applications are very different from those of traditional applications. Different hardware is required. Training AI models, tuning AI models, adding additional specialized data to them and querying AI models each might have a different performance, latency or bandwidth requirement. The inherent parallelism of deep learning or real-time response requirements of inferencing may call for different ways to distribute AI workloads. Measuring how much an AI system is consuming can also require a specialized understanding of tokens and model efficiency.
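
The token-metering point can be sketched with a small, hypothetical per-client budget that an AI gateway might enforce. `TokenBudget` and its method names are invented for illustration; a real gateway would read token counts from the model provider's usage metadata rather than trusting the caller.

```python
from collections import defaultdict


class TokenBudget:
    """Toy sketch of gateway-side metering: count tokens per client
    and reject calls once a budget is exhausted."""

    def __init__(self, limit: int):
        self.limit = limit
        self.used = defaultdict(int)  # tokens consumed per client

    def charge(self, client: str, tokens: int) -> bool:
        """Record a call's token cost; return False if it would exceed the budget."""
        if self.used[client] + tokens > self.limit:
            return False  # over budget: the gateway would reject or queue the call
        self.used[client] += tokens
        return True
```

This is the kind of accounting an API gateway never needed, because traditional request counts and bytes transferred do not map onto what an LLM call actually costs.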


Dispelling the disillusion – demystifying the digital twin

While digital twins certainly have a part to play when it comes to building development and construction, historically this has seen some overengineered, one-size-fits-all solutions, which often have not been best suited to the task at hand. This initial strategy raised expectations that the digital twin would solve all the client’s problems, only to ultimately underperform and disappoint. At Aecom, digital twins are developed as an ecosystem of different data sources, brought together harmoniously to provide a solution that prioritises resolving the use case or specific business need – moving away from multipurpose, off-the-rack to a more tailored, quick-time-to-value approach. Achieving value for the end user comes from determining what interface is required to provide the information they need, reducing things to their simplest components to address the use case at hand. By starting light, with a vision and long-term strategy in place, you can continue to build up and iterate your digital ecosystem where you can keep plugging in new technologies and integrating data sources, allowing it to grow and develop over time.


Feds Issue Alerts for Flaws in 2 Baxter Medical Devices

Many medical device products in use today simply did not have sufficient security testing from their manufacturers - "full stop," said David Brumley, CEO of security firm ForAllSecure and cybersecurity professor at Carnegie Mellon University. While the Food and Drug Administration has a list of new cybersecurity expectations from manufacturers seeking premarket approval for their new medical devices, that intensified FDA review - empowered by Congress - is less than two years old. "The new FDA guidance is only 'premarket,' meaning it's only for new devices that have not been fielded. Everything out there already deployed hasn't had sufficient security testing, and that's security debt we're seeing catch up with us now," Brumley said. The FDA needs to provide stronger regulatory scrutiny and guidance for "currently fielded devices meeting modern security standards, not just premarket devices," Brumley said. "We also need the FDA to be more prescriptive, not less prescriptive. Putting it on the hospitals is the wrong place; it's like asking you to change how you drive your car while flying down the freeway at 80 miles per hour to fix a vendor issue."



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - June 08, 2024

Understanding Security's New Blind Spot: Shadow Engineering

Shadow engineering leaves security teams with little or no control over LCNC apps that citizen developers can deploy. These apps also bypass the usual code tests designed to flag software vulnerabilities and misconfigurations, which could lead to a breach. This lack of visibility prevents organizations from enforcing policies to keep them in compliance with corporate or industry security standards. ... LCNC apps have many of the same problems found in conventionally developed software, such as hard-coded or default passwords and leaky data. A simple application asking employees for their T-shirt size for a company event could give hackers access to their HR files and protected data. LCNC apps should routinely be evaluated for threats and vulnerabilities, so they can be detected and remediated. ... Give citizen developers guidance in easy-to-understand terms to help them remediate risks themselves as quickly and easily as possible. Collaborate with business developers to ensure that security is integrated into the development process of LCNC applications going forward.


‘Technology must augment humanity’: An interview with former IBM CEO Ginni Rometty

While we can't control disruptions, we can control our outlook on the future. Leaders must instill confidence in their teams, emphasising the inevitability of change and the collective ability to find positive solutions. Honesty is a form of optimism, so be honest with yourself and your teams about the issues at hand, resisting attempts to ignore or minimise them. ... Problem-solving is at the core of leadership, so leaders should be unafraid to ask questions, seek insights from others, and involve their teams and wider network in finding solutions. Remember, you do not have to tackle everything alone or have all the answers. When I face a complex problem, I dissect it into manageable pieces and think through each disparate part. ... The right relationships in your life, personal and professional, provide perspective and ideas which is essential for progress. Building a robust network—from friends and family to colleagues and industry peers—provides support and inspiration to maintain optimism and courage amid disruption. The more diverse your network, the more people you can call on to fuel your optimism and courage in the face of disruption.


How Cybersecurity and Sustainability Intersect

Cybersecurity and sustainability are discrete functions in many enterprises, yet they could benefit greatly from being de-siloed. Sustainability and cybersecurity initiatives need C-suite awareness and resources to permeate an enterprise’s culture and actually achieve their goals. “It's not a one-person show anymore. It's really an ownership in that responsibility and a stewardship that cuts across functional leadership across … the entire organization,” says Lynch. In more mature organizations, cybersecurity already has board-level involvement, which can make it easier to see and act on its intersection with sustainability. But for many organizations, cybersecurity and sustainability are separate and even back-office functions. “The cybersecurity leader should not wait for someone to come [and] invite them into these conversations,” says Govindankutty. The stakeholders who need to be involved in cybersecurity and sustainability extend beyond an enterprise’s four walls. Third-party vendors are a vital part of an enterprise’s ecosystem.


Flipping The Script On Startup Success

The first step is to identify the narrowly defined vertical market segments that the company will focus on. The second step is to find a lighthouse customer or two to focus all the team’s attention on to define the minimum viable product (MVP). That is iterative as the customer and the product team go back and forth with features that are must-haves. Then the startup team tests that candidate MVP with a few other customers. ... If you ask any experienced entrepreneur, investor or board member what the most important thing a startup CEO must stay on top of is, it’s to know at all times how much cash they have, what the monthly burn rate is and how long the runway is before cash runs out. Many mistakes are excusable and recoverable, but running out of cash by surprise is neither. ... Culture is not pizza and beer on Fridays, foosball tables or little rooms filled with toys. It is about the values of the company and how they are espoused. It is about the tone the CEO sets and how they communicate with all of their constituents. And the importance of culture is not just about company morale, although that is very important. It is about attracting and retaining the best talent. While it might be nice to think you can put this off while focusing on the first four things, you would be wrong.


Empowering Developers to Harness Sensor Data for Advanced Analytics

Data from sensors offers a treasure trove of insights from the physical world for data scientists. From tracking temperature fluctuations in a greenhouse to analyzing the vibrations of industrial machines in a manufacturing plant, these tiny devices capture crucial information that can be used for groundbreaking research and development. The journey from collecting raw sensor data to actionable analysis can be riddled with stumbling blocks, as the realities of hardware components and environmental conditions come into play. The typical approach to sensor data capture often involves a cumbersome workflow across the various teams involved, including data scientists and engineers. While data scientists meticulously define sensor requirements and prepare their notebooks to process the information, engineers deal with the complexities of hardware deployment and software updates that reduce the scientists’ ability to quickly adjust these variables on the fly. This creates a long feedback loop that delays the pace of innovation across the organization.


To lead a technology team, immerse yourself in the business first

When asked to rank the defining characteristics of a leading CIO, respondents were split between the conventional and contemporary, saying the traditional, more IT-centric qualities are just as important as the strategic and more customer-focused ones. While aligning tech vision and strategy with the business has been the role of CIOs and technology leaders for some time, the scope of their duties now extends deeper into the business itself. "Establishing and managing a tech vision isn't enough," said DiLorenzo. "Today's CIOs need to own all the various technology uses across their organizations and ensure they're actively coordinating and orchestrating their fellow tech leaders -- as well as their business peers -- to co-create a vision and tech strategy that aligns with, and furthers, the overall enterprise strategy." Getting to a leadership position also requires immersing oneself in the business, Shaikh advised. "Business acumen, which includes understanding various business functions and industry dynamics, can be cultivated by spending time in business units," she said. "This understanding is crucial for strategic thinking, to help identify opportunities where technology can impact goals."


The unseen gen AI revolution on the AI PC and the edge

The shift towards edge and PC-based AI is not without its challenges. Privacy and security concerns are paramount, as devices become more autonomous and capable of processing sensitive data. Companies must make privacy and AI ethics the cornerstone of their approach, ensuring that as AI becomes more integrated into our devices, it does so in a manner that respects user privacy and trust. Moreover, the energy efficiency of AI workloads is a critical consideration, especially for battery-powered devices. Advancements in low-power, high-performance processors are pivotal in addressing this challenge, ensuring that the benefits of gen AI are not offset by decreased device longevity or increased environmental impact. Intel’s OpenVINO toolkit further enhances these benefits by optimizing deep learning models for fast, efficient performance across Intel’s hardware portfolio. This optimization enables customers to deploy AI applications more widely, even in resource-constrained environments, without sacrificing performance. As we enter this new era, the way we think about gen AI and how we engage with it will continue to change.


Enhancing Cloud Security in Response to Growing Digital Threats

Hybrid cloud environments, where public clouds combine with on-premises infrastructure, present unique security challenges. Secure migration tools and techniques are vital to prevent data leaks or unauthorized access. Encrypt data before transferring it and place controls on both ends during migration to reduce the associated risks. Network segmentation in hybrid cloud environments requires thorough interconnectivity planning. Carefully configure firewall rules and network access controls to ensure only authorized traffic flows between on-premises resources and those hosted in the cloud. Visibility across hybrid cloud environments requires centralized monitoring to enhance threat detection capability. SIEM solutions can collect security logs from both on-premises and cloud systems, helping provide a unified view of an enterprise’s security posture. The more organizations embrace cloud computing, the more preparation for emerging trends is required. Zero-trust security models, which require continuous authentication and authorization regardless of the device or location, are increasingly popular.
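The segmentation point above can be sketched in a few lines. This is a minimal, default-deny policy check of the kind a firewall or network ACL enforces between on-premises and cloud subnets; the subnet ranges and tier names are illustrative assumptions, not from the article.

```python
import ipaddress

# Hypothetical segmentation policy: which on-premises subnets may reach
# which cloud-hosted subnets. Ranges and tier names are illustrative only.
ALLOWED_FLOWS = {
    ipaddress.ip_network("10.10.0.0/24"): [      # on-prem app tier
        ipaddress.ip_network("172.16.5.0/24"),   # cloud database subnet
    ],
}

def flow_permitted(src: str, dst: str) -> bool:
    """Return True only if src falls in a permitted source subnet and
    dst falls in one of that subnet's allowed destination subnets."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_nets in ALLOWED_FLOWS.items():
        if src_ip in src_net:
            return any(dst_ip in net for net in dst_nets)
    return False  # default deny: any flow not explicitly listed is blocked

print(flow_permitted("10.10.0.7", "172.16.5.20"))  # listed flow -> True
print(flow_permitted("10.10.9.7", "172.16.5.20"))  # unlisted source -> False
```

The default-deny return at the end is the essential property: traffic between segments is blocked unless the policy explicitly allows it.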


Ethical Issues in Information Technology (IT)

Establishing ethical IT practices is also important because people’s trust in the tech industry chips away each time they learn about unethical practices, especially in the wake of reports on data usage by companies such as Facebook and Google. “If companies don’t have ethical IT practices in place, they’re going to lose the trust of their customers and clients,” says Ferebee. “IT professionals need to take it seriously. They also need to let the public know they take it seriously so the public feels safe using their products and services.” Whether or not you’re in a leadership position, it is important to lead by example when it comes to ethics in IT. “People are often afraid to speak up because they’re concerned with the repercussions,” says Ferebee. “But when it comes to ethics in IT, you need to speak up — lead by example, advocate for it, and talk about it all the time. That could include reporting ethical issues, sourcing or creating and then implementing ethics training, and developing internal frameworks for your IT department. You don’t have to be the director of IT to start implementing this.”


Establishing Trust in AI Systems: 5 Best Practices for Better Governance

Security culture drives both behaviors and beliefs. A security-first organization promotes information sharing, transparency, and collaboration. When risks are discovered, or when issues occur, communication should be immediate and designed to clearly convey to employees how their behaviors and actions can both support and detract from security efforts. Enlist employees in these efforts by ensuring that your culture is positive and supportive. ... Security culture does not exist in a vacuum and does not evolve in a silo. Input from a wide range of stakeholders—from employees to customers and partners, regulators and the board—is critical for ensuring that you understand how AI is enabling efficiencies, and where risks may be emerging. ... By seeking input from key constituents in an open and transparent manner, they will be more likely to share their concerns and help uncover potential risks while there’s still time to adequately address those risks. Acknowledge and respond to feedback promptly and highlight the positive impacts of that feedback.



Quote for the day:

"Don't wait for the perfect moment; take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - June 07, 2024

Technology, Regulations Can't Save Orgs From Deepfake Harm

Deepfakes have already become a tool for attackers behind business-leader impersonation fraud — in the past referred to as business email compromise (BEC) — where AI-generated audio and video of a corporate executive are used to fool lower-level employees into transferring money or taking other sensitive actions. In an incident disclosed in February, for example, a Hong Kong-based employee of a multinational corporation transferred about $25.5 million after attackers used deepfakes during a conference call to instruct the worker to make the transfers. ... Creating trusted channels of communication should be a priority for all companies, and not just for sensitive processes — such as initiating a payment or transfer — but also for communications to the public, says Deep Instinct's Froggett. "The best companies are already preparing, trying to think of the eventualities. ... You need legal, regulatory, and compliance groups — obviously, marketing and communication — to be able to mobilize to combat any misinformation," he says. 


Juniper Networks brings industry’s first and only AIOps to WAN routing, delivering AI-native insight for exceptional experiences

Juniper is introducing a new security insights Mist dashboard within its Premium Analytics product to provide comprehensive security event visibility and persona-based policy activation and threat responses. This increased visibility provides actionable intelligence to security teams, enabling them to quickly identify incidents and respond to threats in real-time—thereby improving the user experience. The security insights dashboard in Premium Analytics also helps break down siloed network and security management. ... Another innovation announced by Juniper, Routing Assurance, brings the company’s high performance, sustainable and versatile enterprise edge routing platforms under the Mist AI and cloud umbrella. ... In addition, Marvis, the industry’s first and only AI-Native VNA with a conversational interface built on more than seven years of learning, has been expanded to cover enterprise WAN edge routing. With Marvis’ conversational interface, IT teams can use simple language queries to identify and fix routing issues, including knowledge base queries powered by Generative AI.


How Sprinting Slows You Down: A Better Way to Build Software

First, start by killing the deadlines. In our model, engineers determine when a feature is ready to ship. They are thus able to make principled engineering decisions about what to implement now versus later, delivering better code than they would when making decisions driven by a two-week deadline. Second, assign smaller teams to features and give them greater scope. Because the teams are smaller (often just one engineer!), many new features are developed in parallel. These solo programmers or small teams own the entirety of implementation from back to front. There are no daily standups and needless communication is eliminated. And because the engineers control the implementation across the stack, they can make principled engineering decisions about how to build their functionality, rather than decisions constrained by the sliver of the codebase they happen to own, delivering a more cohesive implementation. The common thread between these two ideas is that they institutionally support making principled decisions, because good decisions today lead to better outcomes tomorrow. 


Why is site selection so important for the data center industry?

Climate considerations are paramount, with weather conditions impacting hazard exposure and vulnerability. Mitigating natural hazards such as floods, earthquakes, and hurricanes through engineered solutions is essential. Access to major highways and airports ensures logistical efficiency, particularly during construction and operation. The air quality surrounding a site affects equipment performance and employee health, necessitating measures to mitigate pollution. Historical data on natural disasters informs risk management strategies and facility design. Ground conditions must undergo thorough geotechnical investigation to assess structural stability and suitability for construction. The availability of robust communications infrastructure, particularly fiber-optic networks, is critical for seamless connectivity. Low latency, enabled by proximity to subsea cable landing sites and dense fiber networks, is imperative for high-performance applications. Geopolitical stability, regulatory environments, and taxation policies influence site selection decisions. Electrical power availability and cost significantly impact operational expenses, with renewable resources offering sustainability benefits.


Maximizing SaaS application analytics value with AI

AI analytics tools offer businesses the opportunity to optimize conversion rates, whether through form submissions, purchases, sign-ups or subscriptions. AI-based analytics programs can automate funnel analyses (which identify where in the conversion funnel users drop off), A/B tests (where developers test multiple design elements, features or conversion paths to see which performs better) and call-to-action button optimization to increase conversions. Data insights from AI and ML also help improve product marketing and increase overall app profitability, both vital components to maintaining SaaS applications. Companies can use AI to automate tedious marketing tasks (such as lead generation and ad targeting), maximizing both advertising ROI and conversion rates. And with ML features, developers can track user activity to more accurately segment and sell products to the user base. ... Managing IT infrastructure can be an expensive undertaking, especially for an enterprise running a large network of cloud-native applications. AI and ML features help minimize cloud expenditures by automating SaaS process responsibilities and streamlining workflows.
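The funnel analysis mentioned above reduces to a simple computation: for each stage transition, measure the share of users lost, then flag the worst one. The stage names and counts below are illustrative assumptions.

```python
# Hypothetical funnel counts per stage (names and numbers are illustrative).
funnel = [
    ("visit", 10_000),
    ("signup_form", 4_000),
    ("submitted", 1_500),
    ("purchase", 300),
]

def worst_drop_off(funnel):
    """For each stage transition, compute the fraction of users lost,
    and return the transition with the largest drop-off."""
    steps = []
    for (prev, n_prev), (stage, n) in zip(funnel, funnel[1:]):
        steps.append((f"{prev}->{stage}", 1 - n / n_prev))
    return max(steps, key=lambda s: s[1])

stage, lost = worst_drop_off(funnel)
print(stage, round(lost, 2))  # prints: submitted->purchase 0.8
```

An AI-driven analytics tool automates exactly this kind of aggregation across many funnels and cohorts, surfacing the transition where intervention (or an A/B test) would pay off most.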


Inside the 'Secure By Design' Revolution

While not legally binding, the pledge encourages those that sign up to show demonstrable progress in each of the seven goals within a year. “One thing that we like, and I think a lot of industry likes, is it allows for flexibility in showing how you meet those goals,” Charley Snyder, head of security policy at Google, tells InformationWeek. If pledge signers are unable to show progress within a year, CISA encourages them to communicate what steps they did take and share what challenges they faced. The agency plans to offer its support throughout the year. “We are going to be working very closely with the pledge signers to help make progress on these pledge goals,” Zabierek explains. “We worked collaboratively with industry to develop the actions, and we're going to maintain that collaboration.” ... Tidelift, a company that partners with open-source maintainers, is not only applying the principles outlined in the pledge to its own software, but it also published an update on the ways it is working to help open-source maintainers achieve the pledge goals.


The next frontier: AI, VR, and the future of educational assessment

One of the most promising applications of AI in assessment is its ability to analyze vast amounts of data to identify patterns and trends in student performance, enabling educators to gain valuable insights into student progress and learning outcomes. By harnessing AI-powered analytics, educators can track student achievement over time, identify areas for improvement, and tailor instruction to address individual learning needs more effectively. ... In addition to AI, Virtual Reality (VR) is revolutionising the assessment landscape by offering immersive and interactive experiences that allow students to engage with content in three-dimensional, multisensory environments, providing opportunities for experiential learning and authentic assessment experiences. Furthermore, VR technology enables educators to assess higher-order thinking skills such as problem-solving, critical thinking, and creativity in ways that are not feasible with traditional assessment methods. Through VR-based scenarios and simulations, students can engage in complex, real-world challenges, make decisions, and experience the consequences of their actions.


Cyber insurance isn’t the answer for ransom payments

Contrary to the belief that having cyber insurance increases the likelihood of ransom payments, Veeam’s research indicates otherwise. Despite only a minority of organizations possessing a policy to pay, 81% opted to do so. Interestingly, 65% paid with insurance and another 21% had insurance but chose to pay without making a claim. This implies that in 2023, 86% of organizations had insurance coverage that could have been utilized for a cyber event. Ransoms paid averaged only 32% of the overall financial impact on an organization post-attack. Moreover, cyber insurance will not cover the entirety of the costs associated with an attack. Only 62% of the overall impact is in some way reclaimable through insurance or other means, with everything else coming out of the organization’s own budget. ... Alarmingly, 63% of organizations are at risk of reintroducing infections while recovering from ransomware attacks or significant IT disasters. Pressured to restore IT operations quickly and influenced by executives, many organizations skip vital steps, such as rescanning quarantined data, increasing the likelihood that IT teams inadvertently restore infected data or malware.
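The percentages above are easier to grasp against a concrete number. The $1,000,000 total impact below is a purely hypothetical figure used to apply the article's 32% and 62% statistics:

```python
# Illustrative only: apply the article's percentages to a hypothetical
# $1,000,000 total post-attack financial impact.
total_impact = 1_000_000
ransom = 0.32 * total_impact        # ransoms averaged 32% of total impact
reclaimable = 0.62 * total_impact   # ~62% reclaimable via insurance or other means
unrecovered = total_impact - reclaimable  # absorbed by the organization itself
print(f"ransom: ${ransom:,.0f}, unrecovered: ${unrecovered:,.0f}")
```

Even with a claim paid in full, roughly $380,000 of a $1M incident would still land on the organization's own budget, which is the article's core point about insurance not being the answer.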


Generative AI agents will revolutionize AI architecture

AI agents possess advanced natural language processing (NLP) capabilities. They can comprehend, interpret, and generate human language, facilitating easy interaction and communication with users and other systems. These agents also work alongside other AI agents or human operators in collaborative and iterative workflows. Through continuous learning and feedback, they refine their outputs and improve overall performance. On paper, AI agents should be in wide use today. Look at all the pros I’ve listed. The downsides are much more difficult to understand. Even though you need tools to build AI agents, the tools are all over the place regarding what they are and how to use them. Don’t let vendors tell you otherwise. First, these are complex beasties to write and deploy. Architects who can design AI agents and developers who can effectively build AI agents are few and far between. I’ve witnessed teams announce they will use agent-based technology and then build something that falls far short of a solution for the proposed business case. Second, you can’t put much into these AI agents or they are no longer agents. You missed the point if your AI agents are vast clusters of GPUs. 


AI in Healthcare: Bridging the Gap Between Proof and Practice

“We see huge social impacts from AI in healthcare – in the data we’ve collected regionally in Pennsylvania, for example,” Dr. Sadeghian added. “Many rural areas have insufficient access to medical procedures. AI will impact society through both safety and convenience. Everybody has smartphones now; why not have the doctor in your hand? A cultural shift is underway.” AI can give a preliminary screening and keep people out of cities and congested areas, bringing access to more rural areas and saving office visits for people who need them. This also impacts transportation, walkability, and other aspects of civic planning – even pollution mitigation. Inviting the black box of AI into healthcare isn’t some hazy dream. It’s happening today. Younger generations are the most scientifically engaged ever, though, which means consensus-building on tech policy could move faster going forward. Politicians have noticed the social, cultural, and economic value of investing in science, technology, engineering, and mathematics education. 



Quote for the day:

"If you don't value your time, neither will others. Stop giving away your time and talents; start charging for it." -- Kim Garst

Daily Tech Digest - June 06, 2024

How AI will kill the smartphone

The great thing about AI is that it’s software-upgradable. When you buy an AI phone, the phone gets better mainly through software updates, not hardware updates. ... As we’re talking back and forth with AI agents, people will use earbuds and, increasingly, AI glasses to interact with AI chatbots. The glasses will use built-in cameras for photo and video multimodal AI input. As glasses become the main interface, the user experience will likely improve more with better glasses (not better phones), with improved light engines, speakers, microphones, batteries, lenses, and antennas. With the inevitable and inexorable miniaturization of everything, eventually a new class of AI glasses will emerge that won’t need wireless tethering to a smartphone at all, and will contain all the elements of a smartphone in the glasses themselves. ... Glasses will prove to be the winning device, because glasses can position speakers within an inch of the ears, hands-free microphones within four inches of the mouth and, the best part, screens directly in front of the eyes. Glasses can be worn all day, every day, without anything physically in the ear canal. In fact, roughly 4 billion people already wear glasses every day.


Million Dollar Lines of Code - An Engineering Perspective on Cloud Cost Optimization

Storage is still cheap. We should really still be thinking about storage as being pretty cheap. Calling APIs costs money. It's always going to cost money. In fact, you should accept that anything you do in the cloud costs money. It might not be a lot; it might be a few pennies. It might be a few fractions of pennies, but it costs money. It would be best to consider that before you call an API. The cloud has given us practically infinite scale, however, I have not yet found an infinite wallet. We have a system design constraint that no one seems to be focusing on during design, development, and deployment. What's the important takeaway from this? Should we now layer one more thing on top of what it means to be a software developer in the cloud these days? I've been thinking about this for a long time, but the idea of adding one more thing to worry about sounds pretty painful. Do we want all of our engineers agonizing over the cost of their code? Even in this new cloud world, the following quote from Donald Knuth is as true as ever.
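The "calling APIs costs money" point lends itself to a back-of-the-envelope check before shipping a chatty code path. The per-request price below is an assumed illustrative rate, not a quote from any cloud provider:

```python
# Back-of-the-envelope cloud cost check. The price is a hypothetical
# illustrative rate, not any provider's published pricing.
PRICE_PER_1K_CALLS = 0.005  # dollars per 1,000 requests (assumed)

def monthly_cost(calls_per_second: float) -> float:
    """Estimate monthly spend for a code path that makes one
    billable API call per request, at a steady request rate."""
    calls_per_month = calls_per_second * 60 * 60 * 24 * 30
    return calls_per_month / 1_000 * PRICE_PER_1K_CALLS

# One "cheap" fraction-of-a-penny call per request, at 100 req/s,
# quietly becomes a four-figure monthly line item.
print(f"${monthly_cost(100):,.2f}/month")  # prints: $1,296.00/month
```

This is the design constraint the author is pointing at: an individually negligible cost, multiplied by production request volume, is exactly the kind of "million dollar line of code" worth estimating before deployment.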


The five-stage journey organizations take to achieve AI maturity

We are far from seeing most organizations fully versed in and comfortable with AI as part of their company strategy. However, Asana and Anthropic have outlined five stages of AI maturity: a guide executives can use to gauge where their company stands in implementing real transformative outcomes. Many respondents say they’re in either the first or second stage. Only seven percent claim they’ve achieved the highest stage. ... Asana and Anthropic conclude that boosting comprehension is important, offering resources, training programs and support structures for knowledge workers to improve their education. Companies must also prioritize AI safety and reliability, meaning that AI vendors should be selected with “complete, integrated data models and invest in high-quality data pipelines and robust governance practices.” AI responses must be interpretable to facilitate decision-making and should always be controlled and directed by human operators. Other elements of organizations in Stage 5 include embracing a human-centered approach, developing strong comprehensive policies and principles to navigate AI adoption responsibly, and being able to measure AI’s impact and value.


Unauthorized AI is eating your company data, thanks to your employees

A major problem with shadow AI is that users don’t read the privacy policy or terms of use before shoveling company data into unauthorized tools, she says. “Where that data goes, how it’s being stored, and what it may be used for in the future is still not very transparent,” she says. “What most everyday business users don’t necessarily understand is that these open AI technologies, the ones from a whole host of different companies that you can use in your browser, actually feed themselves off of the data that they’re ingesting.” ... Using AI, even officially licensed ones, means organizations need to have good data management practices in place, Simberkoff adds. An organization’s access controls need to limit employees from seeing sensitive information not necessary for them to do their jobs, she says, and longstanding security and privacy best practices still apply in the age of AI. Rolling out an AI, with its constant ingestion of data, is a stress test of a company’s security and privacy plans, she says. “This has become my mantra: AI is either the best friend or the worst enemy of a security or privacy officer,” she adds. “It really does drive home everything that has been a best practice for 20 years.”
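The access-control practice Simberkoff describes — limiting employees to the data classifications their job requires — can be sketched as a minimal role-to-clearance lookup with deny-by-default. The role names and classification labels are hypothetical:

```python
# Minimal role-based access check: each role is cleared for a fixed set
# of data classifications. Roles and labels here are hypothetical.
ROLE_CLEARANCE = {
    "analyst":    {"public", "internal"},
    "hr_manager": {"public", "internal", "pii"},
}

def can_access(role: str, classification: str) -> bool:
    """Deny by default: unknown roles and unlisted classifications
    are refused, so sensitive data never leaks via a missing rule."""
    return classification in ROLE_CLEARANCE.get(role, set())

print(can_access("analyst", "pii"))     # analyst blocked from PII -> False
print(can_access("hr_manager", "pii"))  # HR role is cleared -> True
```

The same deny-by-default principle is what keeps sensitive records out of an AI tool's ingestion path: if the identity invoking the tool isn't cleared for a classification, the data never reaches the model.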


How a data exchange platform eases data integration

As our software-powered world becomes more and more data-driven, unlocking and unblocking the coming decades of innovation hinges on data: how we collect it, exchange it, consolidate it, and use it. In a way, the speed, ease, and accuracy of data exchange has become the new Moore’s law. Safely and efficiently importing a myriad of data file types from thousands or even millions of different unmanaged external sources is a pervasive, growing problem. ... Data exchange and import solutions are designed to work seamlessly alongside traditional integration solutions. ETL tools integrate structured systems and databases and manage the ongoing transfer and synchronization of data records between these systems. Adding a solution for data-file exchange next to an ETL tool enables teams to facilitate the seamless import and exchange of variable unmanaged data files. The data exchange and ETL systems can be implemented on separate, independent, and parallel tracks, or so that the data-file exchange solution feeds the restructured, cleaned, and validated data into the ETL tool for further consolidation in downstream enterprise systems.
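The "restructured, cleaned, and validated" step that a data-file exchange layer performs before handing data to an ETL tool can be sketched with the standard library. The column names and validation rules below are illustrative assumptions:

```python
import csv
import io

# Sketch of the pre-ETL cleaning step: parse an incoming CSV from an
# unmanaged external source, normalize header names, and separate valid
# rows from rejects. Required columns and rules are illustrative.
REQUIRED = {"id", "email"}

def clean_rows(raw_csv: str):
    """Normalize field names (strip whitespace, lowercase) and keep only
    rows with a non-empty id and a plausible email address."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    good, rejected = [], []
    for row in reader:
        row = {k.strip().lower(): (v or "").strip() for k, v in row.items()}
        if REQUIRED <= row.keys() and row["id"] and "@" in row["email"]:
            good.append(row)
        else:
            rejected.append(row)
    return good, rejected

raw = "ID , Email \n1,a@example.com\n,missing@example.com\n3,not-an-email\n"
good, rejected = clean_rows(raw)
print(len(good), len(rejected))  # prints: 1 2
```

Only the validated rows flow on to the ETL tool for consolidation; the rejects are surfaced back to the sender rather than silently corrupting downstream systems.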


AI is used to detect threats by rapidly generating data that mimics realistic cyber threats

When we talk about AI, it’s essential to understand its fundamental workings—it operates based on the data it’s fed. Hence, the data input is crucial; it needs to be properly curated. Firstly, ensuring anonymisation is key; live customer data should never be directly integrated into the model to comply with regulatory standards. Secondly, regulatory compliance is paramount. We must ensure that the data we feed into the framework adheres to all relevant regulations. Lastly, many organisations grapple with outdated legacy tech stacks. It’s essential to modernise and streamline these systems to align with the requirements of contemporary AI technology. Also, mitigating bias in AI is crucial. Since the data we use is created by humans, biases can inadvertently seep into the algorithms. Addressing this issue requires careful consideration and proactive measures to ensure fairness and impartiality. ... It’s important for people to be highly aware of biases and misconceptions surrounding AI. We need to be conscious of the potential biases in AI systems. 


Tackling Information Overload in the Age of AI

The reason this story is so universal is that the kind of information that drives knowledge-intensive workflows is unstructured data, which has stubbornly resisted the automation wave that has taken on so many other enterprise workflows using software and software-as-a-service (SaaS). SaaS has empowered teams with tools they can use to efficiently manage a wide variety of workflows involving structured data. However, SaaS offerings have been unable to take on the core “jobs to be done” in the knowledge-intensive enterprise because they can’t read and understand unstructured data. They aren’t capable of performing human-like services with autonomous decision-making abilities. As a result, knowledge workers are still stuck doing a lot of monotonous and undifferentiated data work. However, newly available large language models (LLMs) and generative AI excel at processing and extracting meaning from unstructured data. LLM-powered “AI agents” can perform services such as reading and summarizing content and prioritizing work and can automate multistage knowledge workflows autonomously.


CDOs Should Understand Business Strategy to Be Outcome-focused

To be outcome-focused, the CDO has to prioritize understanding the business or corporate strategy, he says. In addition, one needs to comprehend the organizational aspirations and how to deliver on key business outcomes, which could include monetization of commercial opportunities, risk mitigation, cost savings, or providing client value. Next, Thakur advises leaders to focus on the foundational data and analytic capabilities to drive business outcomes. There must be a well-organized data and analytic strategy to start with, a good tech stack, an analytic environment, data management, and governance principles. While delivering on some of the use cases may take time, it is imperative to have quick wins along the way, says Thakur. He recommends CDOs create reusable data products and assets while having an agile operationalization process. Then, Thakur suggests data leaders create a solid engagement model to ensure that the data analytics team is in sync with business and product owners. He urges leaders to put an effective ideation and opportunity management framework into action to capture business ideas and prioritize use cases.


Besides the traditional functions of sales and finance, there is a growing demand for tech-driven talent in the sector. It’s important to note that the demand for technology expertise isn’t limited to software development but encompasses different competencies, such as cybersecurity, UI/UX development, AI/ML engineering, digital marketing and data analytics. This is expected as there has been an increase in the use of AI and ML in the BFSI landscape, most prominently in fraud detection, KYC verification, sales and marketing processes. ... Now, new-age competencies such as digital skills, data analysis, AI and cybersecurity are increasingly becoming part of these programmes. To meet the growing demand for specialised skills and roles, many BFSI organisations encourage employees with financial expertise to develop digital skills that enable them to work more efficiently. ... At the crossroads of significant industry-level transformations, employers expect a variety of soft skills in addition to technical competencies. 


Cyber Resilience Act Bans Products with Known Vulnerabilities

In future, manufacturers will no longer be allowed to place smart products with known security vulnerabilities on the EU market – if they do, they could face severe penalties ... When it comes to cyber resilience, the legislation of the Cyber Resilience Act makes it clear that customers – both residential and commercial – have an effective right to secure software. However, the race to be the first to discover vulnerabilities continues: organisations would be well advised to implement both effective CVE detection and impact assessment now to better scrutinise their own products and protect themselves against the serious consequences of vulnerability scenarios. “The CRA requires all vendors to perform mandatory testing, monitoring and documentation of the cybersecurity of their products, including testing for unknown vulnerabilities known as ‘zero days’,” said Jan Wendenburg, CEO of ONEKEY, a cybersecurity company based in Duesseldorf, Germany. ... Many manufacturers and distributors are not sufficiently aware of potential vulnerabilities in their own products. 
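The CVE-detection step the article urges reduces, at its simplest, to checking a product's software bill of materials (SBOM) against a known-vulnerabilities feed. The component names, versions, and CVE identifiers below are made up for illustration; a real implementation would query a live feed such as the NVD:

```python
# Sketch of SBOM-based CVE detection. All components, versions, and
# CVE IDs are fabricated for illustration.
KNOWN_VULNS = {
    ("libexample", "1.2.0"): ["CVE-2024-00001"],
    ("netutil", "0.9.1"):    ["CVE-2024-00002", "CVE-2024-00003"],
}

def scan_sbom(sbom):
    """Return every (component, version, CVE) match found in the SBOM."""
    findings = []
    for name, version in sbom:
        for cve in KNOWN_VULNS.get((name, version), []):
            findings.append((name, version, cve))
    return findings

sbom = [("libexample", "1.2.0"), ("netutil", "1.0.0")]
print(scan_sbom(sbom))  # only the vulnerable libexample version matches
```

Under the CRA, a non-empty findings list before release is precisely the situation that blocks a product from the EU market until the vulnerability is remediated; note this catches only *known* CVEs, while the mandated testing for zero days requires additional techniques such as fuzzing and static analysis.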



Quote for the day:

"Life always begins with one step outside of your comfort zone." -- Shannon L. Alder