Daily Tech Digest - June 28, 2023

The digital future may rely on ultrafast optical electronics and computers

Our system of ultrafast optical data transmission is based on light rather than voltage. Our research group is one of many working with optical communication at the transistor level – the building blocks of modern processors – to get around the current limitations with silicon. Our system controls reflected light to transmit information. When light shines on a piece of glass, most of it passes through, though a little bit might reflect. That is what you experience as glare when driving toward sunlight or looking through a window. We use two laser beams transmitted from two sources passing through the same piece of glass. One beam is constant, but its transmission through the glass is controlled by the second beam. By using the second beam to shift the properties of the glass from transparent to reflective, we can start and stop the transmission of the constant beam, switching the optical signal from on to off and back again very quickly. With this method, we can switch the glass properties much more quickly than current systems can send electrons. So we can send many more on and off signals – zeros and ones – in less time.


SEO Poisoning Attacks on Healthcare Sector Rising, HHS Warns

Some threat actors also use targeted types of SEO poisoning, including spear-phishing, to go after specific users, such as IT administrators and other privileged users. "The technique enables attackers to target and customize their attacks to specific audiences, making them more challenging to identify and defend against," HHS HC3 wrote. Common SEO poisoning methods also include typosquatting, which targets users who might open their browser and input a website address that has an inadvertent typo or click on a link with a misspelled URL, HHS HC3 said. Attackers often register domain names that are similar to legitimate ones but contain minor spelling errors. ... Other tactics include cloaking, which involves showing search engine crawlers different material than what is presented to the user when the link is clicked; manipulating search rankings by artificially inflating a website's click-through rate; and using private link networks, which connect groups of unrelated websites to create a network of backlinks to a main website.
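
To make the typosquatting idea concrete, a toy screening check might compare a domain against a list of known-good domains using edit distance; the threshold and domain names below are illustrative assumptions, not anything taken from the HHS advisory.

```typescript
// A small illustration of typosquat screening: flag domains within a short
// edit distance of a legitimate domain. The cutoff of 2 and the domain list
// are illustrative assumptions.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const legitimateDomains = ["examplehealth.org"];

function looksLikeTyposquat(domain: string): boolean {
  return legitimateDomains.some((legit) => {
    const distance = editDistance(domain, legit);
    return distance > 0 && distance <= 2; // close to, but not exactly, a known-good domain
  });
}

console.log(looksLikeTyposquat("examplehea1th.org")); // true: "1" swapped for "l"
```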


Democratizing data to fuel data-driven business decisions

Gone are the days when all an organization’s data lived in just one place. Business and technical users alike must be able to fully leverage data that spans cloud, distributed, and mainframe infrastructures. Because of this, effective data intelligence tools are those that can be applied at scale and support various technology connections to successfully plug and play into large organizations’ complex environments. This is increasingly critical as a growing number of organizations turn to hybrid solutions to leverage the benefits of both the cloud and the mainframe. According to a recent Rocket Software survey, an overwhelming 93% of respondents strongly agree with the statement, “I believe my organization needs to embrace a hybrid infrastructure model that spans from mainframe to cloud.” Today’s data intelligence tools need to remove the barriers that prevent organizations from leveraging their data assets to the fullest. From being able to find and use the right data to increasing data use and trust, maintaining a competitive edge requires organizations to leverage trusted data to make informed and strategic business decisions.


I'm done with Red Hat (Enterprise Linux)

This past week, Red Hat took that knife and twisted it hard when they published this blog post. Let there be no mistake: this was meant to destroy the distributions the community built to replace what Red Hat took away. There were only two things that kept me around after Red Hat betrayed us the first time: First, instead of attacking the community of open source users, many Red Hatters reached out and asked, "How can we do better?" It didn't heal the wound, but it meant something, knowing someone at Red Hat would at least listen. Second—and more importantly—Rocky Linux and AlmaLinux stepped in. They prevented a mass exodus from the Red Hat ecosystem, giving developers like me a stable target for our open source work. But Rocky and Alma relied on Red Hat sharing their source code. Here's how it used to work: Red Hat would grab a copy of Linux; they would add the magic sauce that makes it Red Hat Enterprise Linux; they would release a new version; and they would update a source code repository with all the data required to build it from scratch.


What’s Next for JavaScript: New Features to Look Forward to

TypeScript was developed to make JavaScript developers more productive, rather than to replace JavaScript, but it’s also been a source of improvements to the language. Currently, you use TypeScript to make types explicit in your code while you’re writing it — but then you remove them when your code runs. Though still some way off, the stage 1 Type Annotations proposal, which would allow type information to be included in JavaScript code but treated as comments by JavaScript engines, is important because it converges TypeScript and JavaScript in a way that keeps them aligned while making it clear that they work at different layers. Developers can use first-class syntax for types, whether that’s TypeScript or Flow syntax with long JSDoc comment blocks, and know that their code is still compatible with JavaScript engines and JavaScript tooling — avoiding the complexity of needing a build step to erase the types before their code will run, Palmer pointed out. “There’s huge value just in having static types that only exist during development and are fully erased during runtime,” he explained.
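
As a rough illustration of what erasable type annotations look like in practice, here is a small TypeScript snippet; under the proposal, a JavaScript engine would be allowed to treat such annotations as ignorable comments rather than requiring a build step to strip them (the exact syntax the proposal settles on is still being discussed).

```typescript
// Type annotations written inline, as in TypeScript today. Under the
// stage 1 proposal, a JavaScript engine could simply ignore these
// annotations instead of a build step erasing them first.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// At runtime the types carry no behavior; only the JavaScript remains.
console.log(greet({ id: 1, name: "Ada" }));
```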


Human brain-inspired computer memory design set to increase energy efficiency and performance

The researchers focused on hafnium oxide, an insulating material commonly used in the semiconductor industry. However, there was one significant obstacle to overcome: hafnium oxide lacks structure at the atomic level, making it unsuitable for memory applications. But the team found an ingenious solution by introducing barium into thin films of hafnium oxide, resulting in the formation of unique structures within the composite material. These novel structures, known as vertical barium-rich "bridges," allowed electrons to pass through while the surrounding hafnium oxide remained unstructured. At the points where these bridges met the device contacts, an adjustable energy barrier was created. This barrier influenced the electrical resistance of the composite material and enabled multiple states to exist within it. ... One remarkable aspect of this breakthrough is that the hafnium oxide composites are self-assembled at low temperatures, unlike other composite materials that require expensive high-temperature manufacturing methods.


Enhancing Security With Data Management Software Solutions

Implementing a data management platform offers a multitude of advantages. Foremost, it ensures heightened data integrity, furnishing dependable and accurate information crucial for making well-informed decisions. Furthermore, it mitigates the risk of data breaches, shielding the reputation of your business and preserving the trust of your customers. Lastly, it streamlines regulatory compliance, which proves invaluable considering the stringent data regulations prevalent in numerous jurisdictions. This feature serves as a lifeline, aiding in the avoidance of potential legal entanglements and financial repercussions that may arise from non-compliance. By embracing a comprehensive data management platform, businesses can enjoy the assurance of data accuracy, fortify their security measures, and navigate the complex landscape of regulatory requirements with ease, ultimately fostering growth, resilience, and long-term success. When confronted with a myriad of data management solutions, selecting the ideal one for your business requires careful consideration. 


Wi-Fi 7 is coming — here's what to know

Here's how the Wi-Fi Alliance explains the upcoming standard: “Based on the developing IEEE 802.11be standard, Wi-Fi 7 will be the next major generational Wi-Fi technology evolution. Wi-Fi 7 focuses on physical (PHY) and medium access control (MAC) improvements capable of supporting a maximum throughput of at least 30Gbps to increase performance, enable Wi-Fi innovations, and expand use cases. Additional Wi-Fi 7 enhancements will support reduced latency and jitter for time sensitive networking applications including AR/VR, 4K and 8K video streaming, automotive, cloud computing, gaming, and video applications, as well as mission critical and industrial applications. As with other Wi-Fi generations, Wi-Fi 7 will be backward compatible and coexist with legacy devices in the 2.4, 5, and 6 GHz spectrum bands.” The alliance promises peak data rates of 46Gbps, which is almost four times faster than Wi-Fi 6 (802.11ax) and 6E and five times faster than Wi-Fi 5 (802.11ac). Wi-Fi 7 is also known as IEEE 802.11be Extremely High Throughput (EHT). It works in the 2.4GHz, 5GHz, and 6GHz bands.


Hackers Targeting Linux and IoT Devices for Cryptomining

These bots also are instructed to download and execute additional scripts to brute-force every host in the hacked device's subnet and backdoor any vulnerable systems using the Trojanized OpenSSH package. The bots' purpose is to maintain persistence and deploy mining malware crafted for Hiveon OS systems, which are Linux-based open-source operating systems designed for cryptomining. Microsoft attributed the campaign to a user named "asterzeu" on the cardingforum.cx hacking forum. The user offered multiple tools for sale on the platform, including an SSH backdoor, Microsoft said. Microsoft's disclosure comes two days after a report on a similar campaign was published by the AhnLab Security Emergency Response Center. The attack campaign consists of the Tsunami - another name for Kaiten - DDoS bot being installed on inadequately managed Linux SSH servers, the report said. As observed in Microsoft's analysis, Tsunami also installed various other malware, cryptominers and obfuscation tools, such as ShellBot, XMRig CoinMiner and Log Cleaner.


Most popular generative AI projects on GitHub are the least secure

The OpenSSF Scorecard is a tool created by the OpenSSF to assess the security of open-source projects and help improve them. The metrics it bases the assessment on are different facts about the repository such as the number of vulnerabilities it has, how often it's maintained, and if it contains binary files. By running Scorecard on a project, different parts of its software supply chain will be checked, including the source code, build dependencies, testing, and project maintenance. The purpose of the checks is to ensure adherence to security best practices and industry standards. Each check has a risk level associated with it, representing the estimated risk associated with not adhering to a specific best practice. Individual check scores are then compiled into a single aggregate score to gauge the overall security posture of a project. Currently, there are 18 checks that can be divided into three themes: holistic security practices, source code risk assessment, and build process risk assessment. The Scorecard assigns each check an ordinal score between 0 and 10 along with a risk level.
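
For a sense of how these scores can be consumed programmatically, here is a minimal sketch in TypeScript (Node 18+, global fetch) that pulls a project's aggregate and per-check scores from the publicly hosted Scorecard service at api.securityscorecards.dev; the endpoint shape and field names are assumptions based on that public API and should be verified before use.

```typescript
// Query the public Scorecard API for a GitHub repository and flag
// low-scoring checks. Endpoint path and response fields are assumptions
// about the public service; the repository name is illustrative.
interface ScorecardCheck {
  name: string;
  score: number;
}

interface ScorecardResult {
  score: number;            // aggregate score, 0-10
  checks: ScorecardCheck[]; // individual check results
}

async function fetchScorecard(repo: string): Promise<ScorecardResult> {
  const res = await fetch(`https://api.securityscorecards.dev/projects/github.com/${repo}`);
  if (!res.ok) throw new Error(`Scorecard lookup failed: ${res.status}`);
  return (await res.json()) as ScorecardResult;
}

fetchScorecard("example-org/example-repo").then((result) => {
  console.log(`Aggregate score: ${result.score}`);
  for (const check of result.checks) {
    if (check.score < 5) console.log(`Low-scoring check: ${check.name} (${check.score})`);
  }
});
```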



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - June 27, 2023

The unhappy reality of cloud security in 2023

Configuration problems are often the most significant risk to cloud data and the most often overlooked. Show me a breach, and I’ll show you something stupid that allowed it to happen. One recent example is a large car manufacturer that had more than two million customers’ data exposed due to misconfigurations in its cloud storage systems. Rarely are properly configured security systems bypassed to gain access to data. Often, storage systems are left exposed or databases lack needed encryption. ... Not only are APIs provided by the cloud vendors, APIs are also built into business applications. They provide “keys to the kingdom” and are often left as open access points to business data. Other emerging threats include the use of generative AI systems to automate fakery. As I covered here, these AI-driven attacks are occurring now. As bad actors get better at leveraging AI systems (often free cloud services), we’ll see automated attacks that can work around even the most sophisticated security systems. It will be tough to keep up with the new and innovative ways attacks can occur.


How Automation Enables Remote Work

In a remote work setup, effective communication is paramount. Automation tools such as Slack and Microsoft Teams facilitate better communication by automating tasks like scheduling meetings, sending reminders and translating messages. These tools can also automate the process of organizing and archiving conversations, making it easier to retrieve information when needed. Additionally, they can automate the process of updating team members about project changes or important announcements. These features ensure that all team members are on the same page, enhancing collaboration, reducing the chances of miscommunication and ultimately leading to a more cohesive and efficient team. Automation in human resources (HR) is a game-changer in remote work settings. HR automation software can streamline recruitment, automating resume sorting, interview scheduling and follow-up emails. It can also enhance onboarding, automating welcome emails and account setups. Performance management can be improved with automated feedback collection and goal tracking. 


Self-healing code is the future of software development

It’s easy to imagine a more iterative process that would tap into the power of multi-step prompting and chain of thought reasoning, techniques that research has shown can vastly improve the quality and accuracy of an LLM’s output. An AI system might review a question, suggest tweaks to the title for legibility, and offer ideas for how to better format code in the body of the question, plus a few extra tags at the end to improve categorization. Another system, the reviewer, would take a look at the updated question and assign it a score. If it passes a certain threshold, it can be returned to the user for review. If it doesn’t, the system takes another pass, improving on its earlier suggestions and then resubmitting its output for approval. We are lucky to be able to work with colleagues at Prosus, many of whom have decades of experience in the field of machine learning. I chatted recently with Zulkuf Genc, Head of Data Science at Prosus AI. He has focused on Natural Language Processing (NLP) in the past, co-developing an LLM-based model to analyze financial sentiment, FinBert, that remains one of the most popular models at HuggingFace in its category.
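
As a rough sketch of that generate-review-revise loop, the TypeScript below wires an "editor" prompt and a "reviewer" prompt around a caller-supplied model function; the prompts, the 0-10 scale and the acceptance threshold are illustrative assumptions, not a description of any production system.

```typescript
// A minimal sketch of the iterative editor/reviewer pattern described above.
// `callModel` stands in for whatever LLM API is in use.
type CallModel = (prompt: string) => Promise<string>;

const SCORE_THRESHOLD = 8; // assumed acceptance score on a 0-10 scale
const MAX_PASSES = 3;      // stop after a few improvement rounds

async function improveQuestion(question: string, callModel: CallModel): Promise<string> {
  let draft = question;
  for (let pass = 0; pass < MAX_PASSES; pass++) {
    // Editor step: tweak the title, reformat code, suggest extra tags.
    draft = await callModel(
      `Improve this question's title for legibility, reformat any code, and suggest tags:\n\n${draft}`
    );
    // Reviewer step: a second prompt scores the revised draft.
    const review = await callModel(
      `Rate this question from 0 to 10 for clarity and answerability. Reply with the number only:\n\n${draft}`
    );
    if (parseInt(review, 10) >= SCORE_THRESHOLD) break; // good enough to return to the user
  }
  return draft;
}
```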


Why an ecosystem helps IT projects move forward

To support the data strategy set by the company’s chief data officer, the team needed to specify the capabilities required from a data platform in line with the company’s tech strategy, which is about being cloud-first. Stuart Toll, senior enterprise architect at LGIM, said that time to market, integration time and skills were among the criteria used to assess the data platform providers. While LGIM could probably have made any data platform work, Toll said, “we are an asset management firm. We buy where we can and only build to differentiate.” This influenced the company’s data integration strategy. LGIM did not want to be in the business of stitching lots of tools together, as Matt Bannock, head of data engineering at LGIM, explained. ... Bannock said that with some tools, IT departments need to spend time on data integration. “Being able to just start working with the data, start running the calculation and start generating the output is much more valuable to us than the potential half a percent advantage we could achieve if we created our own ecosystem,” he said. “There’s a lot of benefit in buying into an ecosystem.”


Key Considerations When Hiring a Chief Information Security Officer

Look for candidates who possess a deep understanding of cybersecurity technologies, risk management frameworks, and regulatory compliance. Experience in managing security incidents, implementing security controls, and developing effective security strategies is also crucial. ... A CISO must understand the business landscape in which the organization operates. They should align security objectives with overall business goals and demonstrate a keen understanding of the organization’s risk appetite. A CISO with business acumen can effectively prioritize security investments, articulate the value of security measures to executive management, and build a security program that supports the organization’s strategic objectives. ... The field of cybersecurity is ever-evolving, with new threats emerging regularly. It is crucial for a CISO to stay up-to-date with the latest trends, technologies, and best practices in information security. Look for candidates who demonstrate a commitment to continuous learning, involvement in industry forums, and participation in relevant certifications and conferences.


10 things every CISO needs to know about identity and access management (IAM)

CISOs must consider how to move away from passwords and adopt a zero-trust approach to identity security. Gartner predicts that by 2025, 50% of the workforce and 20% of customer authentication transactions will be passwordless. ... Identity threat detection and response (ITDR) tools reduce risks and can improve and harden security configurations continually. They can also find and fix configuration vulnerabilities in the IAM infrastructure; detect attacks; and recommend fixes. By deploying ITDR to protect IAM systems and repositories, including Active Directory (AD), enterprises are improving their security postures and reducing the risk of an IAM infrastructure breach. ... Attackers are using generative AI to sharpen their attacks on the gaps between IAM, PAM and endpoints. CrowdStrike’s Sentonas says his company continues to focus on this area, seeing it as central to the future of endpoint security. Ninety-eight percent of enterprises confirmed that the number of identities they manage is exponentially increasing, and 84% of enterprises have been victims of an identity-related breach.


Decentralized Storage: The Path to Ultimate Data Privacy and Ownership

A move towards decentralization opens up a significant possibility for individual users: monetization. Data sovereignty would allow users to monetize their data and available storage space. Contributing to network storage would allow users to earn passive income purely from allowing other users to store data on their drives. This could be an alarming concept for users at first. But the realization that only you could access your data on the network using your key, regardless of which node it is stored in, should significantly help overcome this fear. Decentralization also has important implications for businesses and organizations. For example, companies can reduce the risks associated with data breaches and protect customer information more effectively, allowing for more trust with customers in the long term. Organizations could also contribute to network storage on a larger scale, allowing for new economic opportunities.


Too Much JavaScript? Why the Frontend Needs to Build Better

Often, it boils down to one common problem: Too much client-side JavaScript. This is not a cost-free error. One retailer realized they were losing $700,000 a year per kilobyte of JavaScript, Russell said. “You may be losing all of the users who don’t have those devices because the experience is so bad,” he said. That doesn’t mean developers are wrong to ship client-side JavaScript, which is why Russell hates to be prescriptive about how to handle the problem. Sometimes, it makes sense depending on the data model and whether it has to live on the client so that you can access the next email (think Gmail) or paint the next operation quickly (think Figma). The usage tends to correlate with very long sessions, he said. But developers should realize, too, that some frameworks prioritize this approach. “The premise of something like React, the premise of something like Angular, is that [the] data model is local,” he said. “So if that premise doesn’t meet the use case, then those tools just fundamentally don’t make sense,” he said. “You really do have to shoehorn them in for some kind of an exogenous reason, then you hope that it plays out.”


The hardest part of building software is not coding, it’s requirements

Is the idea behind using AI to create software to just let those same stakeholders talk directly to a computer to create an SMS-based survey? Is AI going to ask probing questions about how to handle all the possible issues of collecting survey data via SMS? Is it going to account for all the things that we as human beings might do incorrectly along the way and how to handle those missteps? In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it. There are times when I’m writing software just for myself where I don’t realize some of the difficulties and challenges until I actually start writing code. Over the past decade, the software industry has transitioned from the waterfall methodology to agile. ... So many software projects using waterfall have failed because the stakeholders thought they knew what they wanted and thought they could accurately describe it and document it, only to be very disappointed when the final product was delivered. Agile software development is supposed to be an antidote to this process.


Beyond Backups: Evolving Strategies for Data Management and Security

As businesses continue to generate more data, the need to revamp data management services, including the implementation of effective data backup and recovery strategies, has become central. Comprehensive data backup continues to evolve, and AI and ML have become potent tools in this field, revolutionizing the way organizations approach data backup and recovery. In 2023, we will witness an increase in the adoption of AI/ML technologies such as self-monitoring and management of IT assets and automation along with orchestration of IT activities across on-premises and the cloud. AI will play an increasingly important role both for malicious purposes and to build more proactive and pre-emptive strategies. To enable a competitive advantage to all our customers, our data protection capabilities are fuelled by AI/ML at their core. Our self-driving backup uses AI and ML to automate backup and recovery operations and management, including setup, monitoring, deep visibility, real-time insights, and service level agreement (SLA) tracking.



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - June 26, 2023

Generative AI Gets a SaaSy Touch

Although pure-play Generative AI startups, Rephrase.ai, Blend, ProbeAI, Fasthr.AI to name a few, are not that common yet in India, the technology has set foot in many industries such as healthcare, music and art, finance, advertising and marketing, gaming and entertainment, among others. This has opened up huge opportunities for the SaaS (software as a service) sector. Tapping into the opportunity are India's leading SaaS companies, who are following in the footsteps of global software giants like SAP, Salesforce, and IBM and investing in Generative AI like never before. ... Other prominent SaaS companies are also investing heavily in Generative AI. Freshworks recently unveiled Freddy Self-Service, Freddy Copilot and Freddy Insights to make AI more accessible to every workplace. The new predictive and assistive generative AI capabilities embedded within Freshworks solutions and platform are said to go beyond content generation and help support agents, sellers, marketers, IT teams and leaders become more efficient with a revolutionary new way to interact with their business software.


When low-code and no-code can accelerate app modernization

“Software architecture requires broadening our perspective and considering the larger system, a wider range of possibilities, and a longer time scale,” says Andrew Davis, senior director of methodology at Copado. “Legacy applications are living proof that a significant portion of the cost of software is in its long-term maintenance. Low-code and no-code applications are designed to reduce complexity and thus increase maintainability over time.” Low-code or no-code platforms can help accelerate application modernization, but not in every case. You need a good match between the application's business requirements, user experience, data, architecture, integrations, security needs, compliance factors, and reporting with a low or no-code platform’s capabilities. ... Some types of applications and use cases are better candidates for low-code and no-code. Applications used for departmental business processes such as approval workflows, onboarding, content management, work queues, request processing, budget management, and asset tracking are high on the list.


The CISO’s toolkit must include political capital within the C-suite

Where a security leader sits in a company’s pecking order or to whom they report “is fundamentally irrelevant, because every organization sees things differently,” according to John Stewart, president of Talons Ventures and a former chief security and trust officer at Cisco. “The relevant piece is access, support, authorities, and accountability,” Stewart tells CSO. Stewart has cautioned CISOs many times to be careful of the “I need to report to the CEO to be effective” instinct. “That suggests either the business, the culture, or the individual are ineffective.” A more effective approach should be, according to Stewart: “I need access to the CEO with their support and a clear understanding of my responsibilities and authorities that is backed up with action.” This is pretty much in line with the thinking of Malcolm Harkins, former CISO at Intel and other entities, who tells CSO that it is “unimportant” to whom an individual CISO reports. “The CISO is the one who should be responsible and accountable for mitigating risk,” he says. 


Is starting with a startup the right choice?

When presented with the choice between failing with a plan or succeeding without one, many startup founders and teams lean towards the latter. The prevailing belief is that success is the ultimate goal, regardless of the presence or absence of a plan. However, this perspective fails to consider the reliability, repeatability, and scalability of such success. Without a plan, it becomes uncertain whether the success can be attributed to one's own efforts, sheer luck, or the misfortune of competitors. Relying on hope alone is not a sound strategy. In the fast-paced world of startups, the constant action and hustle often mask personal weaknesses and systemic deficiencies. It is only when the economic tide recedes that the true vulnerabilities of startups are exposed, as astutely pointed out by Warren Buffett's observation, "Only when the tide goes out do you discover who's been swimming naked." During a favorable economic climate, many startups may appear successful without a well-defined strategy. They simply need to be in the right place at the right time. 


Office workers feel AI is better than a human boss

Looking at employee concerns around AI bosses, softer skills and capabilities is the key area respondents think a robot would lack. In addition, just under half of employees think they would struggle to see a robot as an authoritative figure. Despite this, over one in five (22%) admit they would feel more comfortable talking about their frustrations at work to a robot over their boss. According to Business Name Generator, this may be due to not wanting to cause conflict or emotional distress that is part of human interaction. The survey found that 18% would trust a robot boss over their current one. The office workers polled said that a lack of appreciation is the biggest frustration they feel about their bosses, with 14% experiencing this currently. Micromanagement, being a “know it all”, and lacking patience are among the frustrations featured in the top 10, with over one in 10 respondents experiencing this. Poor management skills such as bosses who are disorganised or have unclear expectations also featured in the top 10 of complaints.


A Walkthrough of Adopting Infrastructure as Code

The world of cloud infrastructure is a bit daunting. Pulumi supports over 100 clouds. AWS has over 200 services with over 1,000 individual resources and over 300,000 configurable properties across all of them. Pulumi is a multicloud tool, but “multicloud” does not mean “lowest common denominator.” Instead, Pulumi exposes all those individual clouds, resources and properties in their raw, unadulterated form. The benefit of this is that you have the entire capabilities of all of these clouds right at your fingertips. The downside is that to use them you need to understand these clouds and how to use them properly. As a result, you’ll probably quickly find that you want a starting point, rather than a blank page. Pulumi Templates are a good way to get started. They represent over a dozen of the most common application and infrastructure architectures on the most popular clouds. They were built to be simple enough to be understandable at a glance but complete enough to be useful in practice. 
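
For a taste of what such a starting point looks like, here is a minimal Pulumi program in TypeScript that provisions a single AWS S3 bucket; it assumes the @pulumi/aws provider is installed and AWS credentials are configured, and the resource name and tags are illustrative values rather than part of any official template.

```typescript
import * as aws from "@pulumi/aws";

// A single cloud resource, declared in ordinary TypeScript. Running
// `pulumi up` would create it; the bucket name prefix and tags below
// are illustrative, not prescribed defaults.
const bucket = new aws.s3.Bucket("app-assets", {
  acl: "private",
  tags: { environment: "dev", managedBy: "pulumi" },
});

// Stack outputs make resource attributes easy to consume elsewhere.
export const bucketName = bucket.id;
```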


Data Governance in Higher Ed is Critical. Here’s How to Achieve and Sustain It.

“It’s important to remember, however, that data is an institutional asset,” she says. Divided as institutions are by different schools and academic departments — not to mention professional departments like IT, enrollment or student life — silos are often unavoidable in higher education. Those silos often are one of the biggest barriers to developing and implementing an effective data governance strategy across an institution. That’s because data governance works better as a Venn diagram than in silos, says Matthew Hagerty, a consultant who specializes in IT, efficiency and analytics, and faculty engagement at EAB, a Washington D.C.-based education consulting firm. “Make sure the right people are in the room to craft that policy,” Hagerty says. He says that many times during initial data governance meetings, “maybe halfway through, someone will raise their hand and ask, ‘Wait a second. Why isn’t Bob from finance here? Who’s representing human resources in this committee?’”


Hate being more productive? Ignore AI agents

Business school professor and technologist Ethan Mollick offers what I’ve found to be a very useful framing for how to think about generative AI: “It is not good software, [rather] it is pretty good people.” And rather than thinking about AIs as people who replace those already on the payroll, treat them like “eager interns” that can help them be more productive. This metaphor can help on two fronts. First, it keeps the need for human supervision front and center. Just as hiring and productively managing interns is a valuable competency for an organization, so too is using ChatGPT, Microsoft’s CoPilot, or Google’s Bard. But you would no more blindly trust this class of model than you would even the most promising intern. Second, and as important: IT isn’t responsible for hiring interns in Finance and HR. Likewise, Finance and HR (and every other function) must build their own competency in figuring out how to use these tools to be more productive. The job to be done is closer to answering domain-specific staffing questions than IT questions.


Top Tips for Weeding Out Bad Data

Bad data often really means low quality data. In this case, it’s up to the data owner to define the acceptable level of quality in terms of relevance, accuracy, age, or other criteria. “But bad data can also mean inappropriate data, in which case “appropriate” would need to be defined,” says Erik Gfesser, director and chief architect at business advisory firm Deloitte Global. One enterprise’s highly useful data might be meaningless to another. Since many use cases aren’t particularly demanding, data quality doesn’t always have to adhere to the same standards. “As such, judgment often needs to be used to determine what’s appropriate,” he explains. It’s also important to check for duplicate records, which can be caused by data entry errors or identical data being retrieved from multiple sources. “A clearly defined data governance program and an enterprise-level data pipeline design that’s shared enterprise-wide are the best ways to prevent duplicate records,” Shah recommends. It’s possible to identify outliers and detect anomalies by comparing values that appear to be significantly different from the rest of the data or by running statistical tests, such as regression analysis, hypothesis testing, or correlation analysis, to identify patterns in data, Shah says.
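
To ground two of the checks mentioned above, the sketch below flags duplicate records by id and flags outliers with a simple z-score test; the field names and the 3-standard-deviation cutoff are illustrative assumptions rather than recommendations from the article.

```typescript
// Minimal duplicate and outlier checks for a flat list of records.
interface DataRecord {
  id: string;
  value: number;
}

function findDuplicates(records: DataRecord[]): DataRecord[] {
  const seen = new Set<string>();
  return records.filter((r) => {
    if (seen.has(r.id)) return true; // same id seen before: likely a duplicate
    seen.add(r.id);
    return false;
  });
}

function findOutliers(values: number[], zCutoff = 3): number[] {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance);
  // Values more than zCutoff standard deviations from the mean get flagged.
  return std === 0 ? [] : values.filter((v) => Math.abs((v - mean) / std) > zCutoff);
}
```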


Choosing the Right Data Architecture

"If it turns out that none of those references is close to your scale, doing what you want to do, then you know you're well beyond the frontier of the vendor’s product." If that’s the case, then you need to conduct tests to help control and manage your risk. "The best kind of test is a full-scale, realistic benchmark, and the best case is where you have more than one credible vendor." Winter recommends testing two or three solutions and comparing the results. You can see if any vendor can demonstrate they have the capability to meet your most critical requirements. If multiple vendors pass this test, then examine differences in cost, complexity, and the agility of the solution. "These differences can be very revealing. Once you've illuminated what's going on via testing, you can get into much deeper conversations with the vendor about what you're seeing in the behavior of the system. We've had remarkable experiences doing this, even with the most modern systems in the cloud."



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - June 25, 2023

Traffic Routing in Ambient Mesh

The ambient mesh deployment model is much leaner than the sidecar data plane deployment model, allowing for incremental adoption of service mesh features and making it less risky. As ambient mesh includes fewer components, this leads to reduced infrastructure costs and performance improvements, as captured in this blog post. Ambient mesh does all this while retaining all the service mesh critical features, including zero trust security. ... The new Rust-based ztunnel proxy is responsible for mTLS, authentication, L4 authorization and telemetry in the ambient mesh. Its job is to proxy the traffic between ambient mesh pods. Optionally, the ztunnel proxies to L7 waypoint proxies, ingress and, in the future, egress proxies. Ztunnels on different nodes establish a tunnel using HBONE (HTTP-Based Overlay Network Environment). Similarly, the tunnel gets established between the ztunnel and the waypoint proxy, if one exists. The tunnel that’s established between the ztunnels allows the source ztunnel to connect to the destination workload on behalf of the source workload.


Unleashing Business Growth: The Power of Adopting Enterprise Architecture

Enterprise architecture plays a vital role in the success and growth of modern businesses. By aligning business and IT strategies, enhancing agility, optimizing resources, mitigating risks, and fostering innovation, EA provides a solid foundation for sustained growth and competitive advantage. As businesses continue to navigate an increasingly complex landscape, leveraging the business-critical value of enterprise architecture becomes imperative to capture new opportunities and drive long-term success. So, whether you are a business leader, IT professional, or decision-maker, embracing EA as a strategic imperative will position your organization for growth, resilience, and innovation in the ever-changing business landscape. Remember, developing an enterprise architecture is not a one-time effort but an ongoing journey of adaptation and improvement. It requires collaboration, commitment, and continuous refinement to realize its full potential in driving business growth.


IT firms expect to increase hiring next quarter, ManpowerGroup says

Among the roles most in demand in IT are project managers, business analysts, and software developers. "I wish we could clone full stack developers. We can't find enough of them," Doyle said. In past years, ManpowerGroup’s survey has been conducted by telephone. This year, it was done online. Regionally, the strongest hiring intentions for next quarter are in the west, with 43% of employers planning to add workers, according to ManpowerGroup. In the northeast, 40% of employers plan to increase staff; the midwest is expected to see a 32% increase; and companies in the south are expected to boost hiring by 29%. Large organizations with more than 250 employees are more than three times as optimistic as small firms (with fewer than 10 employees) to hire in the next quarter, with employment outlooks of +47% and +14%, respectively. Earlier this month, the US Bureau of Labor Statistics (BLS) released its hiring data for the month of May; it showed a 0.3% increase in overall unemployment — from 3.4% to 3.7%.


Building Effective Defenses Against Social Engineering

In addition to awareness training and education, quite a number of technologies are available to augment and fortify efforts to limit the impact of social engineering attacks. Cloud-based email security gateways are just one example. Depending on budget, staffing, age of existing infrastructure, the value of the assets to be protected and other aspects, a layered defense strategy may range from relatively low-cost and simple to more elaborate (and expensive) endeavors. Enforcement of strong passwords is an example of a relatively cheap, easy and fast tactic that can be highly effective in averting data breaches and other cyberattacks. Other strategies and techniques can be rolled out in parallel with existing technologies to minimize disruption while preparing for a new, stronger security infrastructure. A zero-trust network architecture (ZTNA) is one such example; it can be deployed alongside a secure sockets layer (SSL) virtual private network (VPN), working as an overlay at first to boost security and eventually replacing it.


Data Breach Lawsuit Alleges Mismanagement of 3rd-Party Risk

The latest GoAnywhere-related lawsuit alleges that ITx could have prevented the theft of sensitive data "had it limited the patient information it shared with its business associates and employed reasonable supervisory measures to ensure that adequate data security practices, procedures and protocols were being implemented and maintained by business associates." ITx's "collective inadequate safeguarding and supervision of class members' private information that they collected and maintained, and its failure to adequately supervise its business associates, vendors and/or suppliers" has put the plaintiffs and class members at risk for ID fraud and theft crimes, the complaint also alleges. The lawsuit says victims will be at higher risk for phishing, data intrusion and other illegal schemes through the misuse of their private information. It also points out that their data is still held by ITx and could be exposed to future breaches without the court's corrective action. The lawsuit seeks monetary damages, lifetime credit and identity monitoring for the plaintiff and class members, as well as a court order for ITx to take measures to prevent any future similar data security incidents.


Who owns the code? If ChatGPT's AI helps write your app, does it still belong to you?

Attorney Richard Santalesa, a founding member of the SmartEdgeLaw Group based in Westport, Conn., focuses on technology transactions, data security, and intellectual property matters. He points out that there are issues of contract law as well as copyright law -- and they're treated differently. From a contractual point of view, Santalesa contends that most companies producing AI-generated code will, "as with all of their other IP, deem their provided materials -- including AI-generated code -- as their property." OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title and interest in and to Output." Clearly, though, if you're creating an application that uses code written by an AI, you'll need to carefully investigate who owns (or who claims to own) what. For a view of code ownership outside the US, ZDNET turned to Robert Piasentin, a Vancouver-based partner in the Technology Group at McMillan LLP, a Canadian business law firm.


Shadow SaaS, changing contracts and ChatGPT adoption: SaaS trends to watch

As more companies move to remote work, many find that shorter (one-year) contracts are preferable because they allow for more flexibility. Reducing contract lifetime is also a way for organizations to reduce overhead costs. One-year contracts accounted for 79% of all contracts in 2020 and 85% of all contracts in 2022. Three-year and longer contracts declined the most year-over-year. In 2023, SaaS spend per employee averaged $9,643. Large businesses spent an average of $7,492 per employee in 2022, while medium-sized businesses spent $10,045 and small and medium-sized businesses spent $11,196. The large businesses spent less because they received volume discounts and enterprise-wide licensing agreements, as well as better efficiency of scale with consumption-based apps, Productiv said. “To avoid shadow IT, organizations need to develop appropriate SaaS governance policies that help teams take their free and purchased apps out of the shadows and ensure the right level of corporate policies for procurement, security and compliance,” Chandarana said.


How AI is reshaping demand for IT skills and talent

AI opens new doors for security threats and compliance issues as well that organizations must be prepared to address. “On the technical side, I see security as hugely important,” says Hendrickson. “A lot of companies say, ‘We’re not letting people touch ChatGPT yet, we’re just not allowing it—it’s blocked.’” But end-users’ propensity for finding ways to improve their work processes will no doubt lead to greater levels of shadow IT around such emerging technologies, and thus, security implications will eventually need to be tackled beyond simply trying to hold back the tide. Moreover, Hendrickson points to the fact that just a few years ago, discussions around machine learning centered around its ability to break encryption, and with quantum machine learning on the horizon, that concern has only increased. As companies navigate AI in the workplace, they’re going to need skilled professionals who can identify potential risks and pinpoint possible solutions. There are also increased complexities around “managing the infrastructure and platforms that provide resources to power applications, and to store and access data,” says Kim.


Decision Rights Rule the World – Architecture Design Part 3

Think of the number of decisions made related to technology daily in your organization. Try to imagine every library, product, SaaS tool, vendor agreement, pattern, style, and reference model that is being chosen by one or more people. From huge (ERP, standardizing a single cloud vendor, information management structures) to small (library dependency, pattern application to code, GitHub structure). The real question is, how many of those are architecturally relevant (Note: it is NOT all of them)? And how many of them come with a decision record of any kind? I have asked this question of countless audiences and teams over time. The answer is… almost none. And that is scary. We end up with WHAT we decided, not WHY we decided. Traceability, understanding, decision excellence are all thrown out the window because we think it might take too long. Just FYI, whenever I have implemented decision management in teams, important decisions (structural, value-based, etc.) go FASTER, not slower. The decision record allows us to focus on apples to apples instead of long-winded, emotionally charged, opinion-heavy, biased arguments.
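
One lightweight way to keep the WHY next to the WHAT is a structured decision record; the shape below follows common architecture-decision-record (ADR) conventions and is purely illustrative, not a format the article prescribes.

```typescript
// An illustrative shape for a lightweight architecture decision record.
interface DecisionRecord {
  id: string;                                        // e.g. "ADR-0042"
  title: string;                                     // what was decided
  status: "proposed" | "accepted" | "superseded";
  context: string;                                   // forces and constraints at play
  decision: string;                                  // the choice that was made
  rationale: string;                                 // WHY it was made, not just WHAT
  consequences: string[];                            // trade-offs accepted with the decision
  date: string;                                      // ISO date of the decision
}

// A hypothetical example entry.
const example: DecisionRecord = {
  id: "ADR-0001",
  title: "Standardize on a single cloud vendor for new workloads",
  status: "accepted",
  context: "Teams currently provision across three clouds with duplicated tooling.",
  decision: "New workloads target one primary cloud unless an exception is approved.",
  rationale: "Reduces operational surface area and duplicated platform skills.",
  consequences: ["Exit costs increase", "An exception process is needed for edge cases"],
  date: "2023-06-25",
};
```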


Structured for Success: 4 Architectural Pillars of Cyber Resilience

Having centralized visibility is fundamental to not only taking control of cloud environments but also bridging silos. In a recent survey conducted by Forrester, 83% of IT decision-makers said a single consolidated view for managing their organizations’ cloud and IT services would help achieve their business outcomes — including improving their cybersecurity posture. ... Immutable data storage enables data to be stored such that, once written, it is impossible to change, erase or otherwise interfere with it. This functionality guards against malware, ransomware, and both unintentional and malicious human behavior. Since it effectively protects data against any change or erasure, as would be typical in a ransomware attack that tries to encrypt data, immutability is commonly regarded as a prerequisite in the battle against ransomware. ... Beyond this 3-2-1 rule (keep three copies of data, on two different types of media, with one copy off-site), organizations need a scalable backup and recovery infrastructure — one that makes management fast and simple — to sustain business continuity and operations in the current cybersecurity landscape.



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - June 24, 2023

Technologists Want to Develop Software Sustainably

Ingrid Olson, principal with application security at Coalfire, explains that organizations such as the Green Software Foundation can help educate and unite developers interested in learning more about green development practices. "In the right economy developers can also 'vote with their feet' by seeking out employment with companies that advocate for more environmentally sustainable development practices," she says. She adds that everyone is a stakeholder either directly or indirectly in the effort to create more sustainable software development practices. "In addition to the environmental benefits of green software development, in the long term there are also a lot of potential financial benefits from development practices that result in reduced carbon emissions," Olson says. ... Tegan Keele, KPMG US climate and data technology leader, explains that software that includes complex computational models, like AI and ML, typically requires more computing "horsepower" to develop, test and run. "The more intensive that process, the greater proportion of a data center’s computing power that process takes up," he says.


Shift in Sprint Review Mindset: from Reporting to Inclusive Ideation

It's useful to remember that our brains are wired to expect things based on what we've experienced before: it's extremely helpful when the situation is similar, but it can also prevent us from being open to new but important nuances. The new corporate language, certain experiences and time spirits should be learned. There may be times when your attention can make a big difference. For example, you may share a seemingly mundane suggestion in a meeting, only to notice a distinct shift in the atmosphere of the room. It's like walking into a bad neighborhood in a new country and instinctively feeling like an outsider. The level of danger is different, of course, but in both cases it's important to investigate and learn from these new experiences. Therefore, like a seasoned traveler, change agents should be extremely open-minded and strive to understand the culture they're entering, while not blending in and staying true to their values and beliefs. It's important to understand the value and function of the current Sprint Review processes, while resisting "it won't work in our environment" and other skepticism. 


The Rise of Developer Native Dynamic Observability

Dynamic observability comes to address and solve these challenges. Basically, as opposed to static logging, with dynamic observability developers enjoy end-to-end observability across app deployments and environments directly from their IDEs. This translates into reduced MTTR, enhanced developer productivity and overall cost optimization since developers debug and consume logs and telemetry data where and when they need it rather than monitoring everything. Dynamic observability has emerged as a pivotal approach in modern software development, enabling teams to gain deep insights into system behavior and make informed decisions. It goes beyond traditional testing and monitoring methodologies, offering a comprehensive understanding of system patterns, strengths and weaknesses. ... Dynamic observability represents a paradigm shift in software development, enabling developers to gain a detailed understanding of system behavior and make informed decisions. Using tools and practices that go beyond traditional testing, it empowers teams to create robust and reliable systems.


Monolithic or microservices: which architecture best suits your business?

In the monolithic world, you’re dealing with one single codebase. The simplicity of this model makes it a great choice for small-to-medium-sized applications. But, as the business grows, so do the challenges. Every change, no matter how small, requires a full redeployment. Scaling particular functions can turn into a headache, with these slowing down your go-to-market speed and impacting your responsiveness. On the other hand, the microservices approach works like a small, self-contained team that collaborates, but can also work independently. This architecture gives you the flexibility of scaling, updating, and deploying each service independently — great for scalability, but with added complexity. Imagine trying to coordinate different teams spread out around the world, each with its own time zone and function. Managing microservices is a bit like that. Choosing the right architectural style isn’t just about handling the technology stack, it’s about aligning your tech with your business strategy. 


Microsoft slammed for hitting European cloud users with ‘unfair, additional’ charges

The issue can be traced back to a Microsoft licensing-related policy change in 2019 that stopped customers from deploying on-premise Office 365 licenses on third-party infrastructure. According to the report, this move may have generated an estimated €560m in first-year license repurchase costs for European enterprises. “An additional surcharge of €1bn, relating to licensing surcharges imposed on non-Azure deployments of SQL Server, may further be attributed to the policy change,” said the report. “If this Microsoft tax equals €1bn per year for just one product among potentially hundreds, the overall cost to the European economy as it looks to move enterprise and productivity computing to the cloud must be estimated to be significantly higher.” It goes on to make the point that this additional spend is money that could be used to accelerate the pace of digital transformation for European enterprises and, in the case of the public sector, this is taxpayers’ money that is being “unfairly diverted to already-dominant players”.


Making Better Data-Informed Decisions to Navigate Disruptions

Traditionally, companies have managed risks across domains that, while often volatile, were nevertheless limited in scope. Market dynamics, disruptive technology, and regulatory risks can change dramatically quarter to quarter, for example, but business leaders often rely on several key assumptions about broader global trends. However, the events of recent years have made manifest that business and political leaders can no longer rely on these assumptions. A lingering pandemic and its impacts have drawn into question traditional supply chain and risk management approaches. Social and political concerns have introduced new regulatory risks to businesses across industries. Global economic uncertainty lingers. Climatic risks require business to reconsider both their current supply chain strategies and long-term geographic footprints. Finally, geopolitical risks—including war and sanctions—and the uncertainty of some international agreements have upended traditional assumptions about the security of long-term investments.


Six skills you need to become an AI prompt engineer

Prompt engineering is fundamentally the creation of interactions with generative AI tools. Those interactions may be conversational, as you've undoubtedly seen (and used) with ChatGPT. But they can also be programmatic, with prompts embedded in code, the rough equivalent of modern-day API calls; except, you're not simply calling a routine in a library, you're using a routine in a library to talk to a vast large language model. Before we talk about specific skills that will prove useful in landing that prompt engineering gig, let's talk about one characteristic you'll need to make it all work: a willingness to learn. While AI has been with us for decades, the surge in demand for generative AI skills is new. The field is moving very quickly, with new breakthroughs, products, techniques, and approaches appearing constantly. To keep up, you must be more than willing to learn -- you must be voracious in learning, looking for, studying, and absorbing everything you possibly can find. If you keep up with your learning, then you'll be prepared to grow in this career.
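
To make the programmatic side concrete, here is a minimal sketch of a prompt embedded in code (TypeScript, Node 18+ with global fetch) that calls the OpenAI chat completions REST endpoint; the model name and prompts are illustrative, and it assumes an OPENAI_API_KEY environment variable is set.

```typescript
// A "programmatic prompt": calling a hosted LLM from code rather than a
// chat UI. The system/user prompts and model choice are illustrative.
async function summarize(text: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You summarize text in one sentence." },
        { role: "user", content: text },
      ],
    }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content; // the model's reply
}
```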


Author Talks: Create your ‘reinvention road map’ in four easy steps

The first step, the search, is fascinating. This is when you are collecting information, collecting experiences. What’s key about it is that most people don’t realize it’s unintentional. This is the stuff that is going to take you to your transition, to your reinvention, but you don’t know it at the time. For career people, maybe it’s a side hustle or just a random interest, a hobby. That’s the search. The second step is the struggle. The struggle is where you have disconnected, or you’re starting to disconnect, from that previous identity, but you have not figured out where you are going. It’s really uncomfortable, and we don’t like to talk about it. When we tell these reinvention stories, we tend to skip over this part. But it’s incredibly important, as the struggle is where all the important work gets done. The struggle often doesn’t end until you hit the third step, the stop. The stop might be something that you initiate: for example, I quit my job. But it may be something imposed on you—for example, you lose your job. Or it could be a trauma, like a divorce or an illness in the family or a pandemic. 


6 strategic imperatives for your next data strategy

In many industries, depending on how your customers consume and extract value from your products and services, your data can be monetized across multiple layers in the tech stack, from raw data itself and data with various forms of post-processing applied for added insights, to data consumed via visualization and analytics tools, and data consumed via industry applications such as digital twins. In the architecture, engineering and construction (AEC) industry, for example, these scenarios might include geospatial data like aerial imagery offered directly via an ecommerce-enabled website, drone-based photogrammetry of roadways and bridges with AI-enabled defect analysis, like Manam, traffic congestion data visualized via a GIS platform, like Urban SDK, or EV charging data provided by a live digital twin. ... Look for opportunities to combine your own data with third-party data, including open data, where applicable, for added value and for tools that support data ingestion, transformation, and integration to feed into a variety of analysis tools including GIS and digital twins.


China-sponsored APT group targets government ministries in the Americas

The campaign ran from late 2022 into early 2023. It also targeted a government finance department in a country in the Americas and a corporation that sells products in Central and South America. There was also one victim based in a European country, according to the report. ... Graphican can create an interactive command line that can be controlled from the server, download files to the host, and set up covert processes to harvest data of interest. This technique was used earlier by the Russian state-sponsored APT group Swallowtail in a campaign in 2022 to deliver the Graphite malware. “Once a technique is used by one threat actor, we often see other groups follow suit, so it will be interesting to see if this technique is something we see being adopted more widely by other APT groups and cybercriminals,” Symantec said in its report. Flea has been in operation since at least 2004. Initially, it used email as the initial infection vector, but there have also been reports of it exploiting public-facing applications, as well as using VPNs, to gain initial access to victim networks. 



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - June 22, 2023

Mass adoption of generative AI tools is derailing one very important factor, says MIT

Many companies "were caught off guard by the spread of shadow AI use across the enterprise," Renieris and her co-authors observe. What's more, the rapid pace of AI advancements "is making it harder to use AI responsibly and is putting pressure on responsible AI programs to keep up." They warn the risks that come from ever-rising shadow AI are increasing, too. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI -- algorithms (such as ChatGPT, Dall-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio -- exposes them to new commercial, legal, and reputational risks that are difficult to track. The researchers refer to the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."


From details to big picture: how to improve security effectiveness

Benjamin Franklin once wrote: “For the want of a nail, the shoe was lost; for the want of a shoe the horse was lost; and for the want of a horse the rider was lost, being overtaken and slain by the enemy, all for the want of care about a horseshoe nail.” It’s a saying with a history that goes back centuries, and it points out how small details can lead to big consequences. In IT security, we face a similar problem. There are so many interlocking parts in today’s IT infrastructure that it’s hard to keep track of all the assets, applications and systems that are in place. At the same time, the tide of new software vulnerabilities released each month can threaten to overwhelm even the best organised security team. However, there is an approach that can solve this problem. Rather than looking at every single issue or new vulnerability that comes in, how can we look for the ones that really matter? ... When you look at the total number of new vulnerabilities that we faced in 2022 – 25,228 according to the CVE list – you might feel nervous, but only 93 vulnerabilities were actually exploited by malware. 
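
The prioritization idea is easy to sketch in code: cross-reference your scanner’s findings against a list of vulnerabilities known to be exploited in the wild, and work that short list first. The file names and column name below are hypothetical local exports (for example, from a vulnerability scanner and a known-exploited-vulnerabilities feed).

```python
import csv

def load_cve_ids(path: str, column: str) -> set[str]:
    """Read one column of CVE IDs from a CSV export into a set."""
    with open(path, newline="") as f:
        return {row[column].strip().upper() for row in csv.DictReader(f)}

# Hypothetical exports: every finding from your scanner, and the much shorter
# list of CVEs known to be exploited by malware.
findings = load_cve_ids("scanner_findings.csv", "cve_id")
exploited = load_cve_ids("known_exploited.csv", "cve_id")

urgent = findings & exploited  # the vulnerabilities that really matter
print(f"{len(urgent)} of {len(findings)} findings are known to be exploited:")
for cve in sorted(urgent):
    print(" ", cve)
```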


3 downsides of generative AI for cloud operations

While we’re busy putting finops systems in place to monitor and govern cloud costs, we could see a spike in the money spent supporting generative AI systems. What should you do about it? This is a business issue more than a technical one. Companies need to understand how and why cloud spending is occurring and what business benefits are being returned. Then the costs can be included in predefined budgets. This is a hot button for enterprises that have limits on cloud spending. The line-of-business developers would like to leverage generative AI systems, usually for valid business reasons. However, as explained earlier, they cost a ton, and companies need to find either the money, the business justification, or both. In many instances, generative AI is what the cool kids use these days, but it’s often not cost-justifiable. Generative AI is sometimes being used for simple tactical tasks that would be fine with more traditional development approaches. This overapplication of AI has been an ongoing problem since AI was first around; the reality is that this technology is only justifiable for some business problems.


Pros and cons of managed SASE

If a company decides to deploy SASE by going directly through SASE vendors, they’ll have to configure and implement the service themselves, says Gartner’s Forest. “The benefits of a managed service provider are a single source for all setup and management, the ability to redeploy internal resources for other tasks, and the ability to access skills and capabilities that don’t exist internally,” he says. Getting in-house IT staff with the right expertise to handle SASE can be a real challenge, particularly in today’s hiring climate: 76% of IT employers say they’re having difficulty finding the hard and soft skills they need, and one in five organizations globally is having trouble finding skilled tech talent, according to a 2023 survey by ManpowerGroup. Access to outside experts is particularly appealing to companies that don’t have the resources to manage SASE themselves. Managed SASE providers have specialized expertise in deploying and managing SASE infrastructure, “which can help ensure that your system is set up correctly and stays up to date with the latest security features and protocols,” says Ilyoskhuja Ikromkhujaev, software engineer at software developer Nipendo.


The security interviews: Exploiting AI for good and for bad

AI has moved beyond automation. Looking at large language models, which some industry experts see as the tipping point that ultimately leads to wide-scale AI adoption, Heinemeyer believes that an AI capable of writing code offers attackers the opportunity to develop much more bespoke, tailored and sophisticated attacks. Imagine, he says, highly personalised phishing messages with error-free grammar and no spelling mistakes. For its customers, he says, Darktrace uses machine learning to learn what normal looks like in business email data: “We learn exactly how you communicate, what syntax you use in your emails, what attachments you receive, who you talk to, and when this is internal or external. We can detect if somebody sends an email that is unusual for you.” A large language model like ChatGPT reads everything that is on the public internet. The implication is that it will be reading people’s social media profiles, seeing who they interact with, their friends, what they like and do not like. Such AI systems have the ability to truly understand someone, based on the publicly available information that can be gleaned across the web.
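
As a rough illustration of the “learn what normal looks like” idea (a generic sketch, not Darktrace’s actual approach), an anomaly detector can be fitted on a user’s historical email behaviour and then used to score new messages. The feature set below is invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one historical email with simple, hypothetical features:
# [hour_sent, num_recipients, num_attachments, is_external]
normal_history = np.array([
    [9, 1, 0, 0], [10, 2, 1, 0], [14, 1, 0, 1],
    [11, 3, 0, 0], [16, 1, 1, 1], [9, 2, 0, 0],
])

# Fit an unsupervised anomaly detector on the user's "normal" behaviour.
model = IsolationForest(contamination=0.05, random_state=0).fit(normal_history)

# Score a new message: a 3 a.m. email to ten external recipients with
# five attachments. predict() returns -1 for anomalies and 1 for inliers.
new_email = np.array([[3, 10, 5, 1]])
print(model.predict(new_email))
```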


Switching the Blame for a More Enlightened Cybersecurity Paradigm

The “blame the user” mentality is a cognitive bias that ignores the complexities of human-computer interaction. Research in cognitive psychology and human factors engineering has shown that humans are not designed to be perfect digital operators. Mistakes are a natural part of our interaction with systems, especially those that are complex and non-intuitive. Moreover, our susceptibility to scams and manipulation is not just a personal failing, but a product of millennia of evolution. For instance, social engineering attacks exploit our natural tendency to trust and cooperate, which have been crucial to human survival and societal development. To put the onus on the individual is to ignore the broader context. Shifting the blame is an easy way out. It absolves organizations of the responsibility to address systemic issues and allows them to maintain the status quo. This is underpinned by the “just-world hypothesis,” a cognitive bias which propounds that people get what they deserve. When an employee falls for a scam, it's easy to assume that they were careless or ill-prepared.


Standardized information sharing framework 'essential' for improving cyber security

Security experts have called for improvements in how private sector organizations share threat intelligence data with the wider industry. It’s believed that better cross-organizational collaboration would improve cyber resiliency in the face of cyber attacks that continue to rise in frequency and grow ever more sophisticated. “I think this is one of the ways in which the private sector can work with governments around the world, and each other across sectors, industries, and regions,” said Jen Ellis, co-chair at the Institute for Security and Technology’s Ransomware Task Force. Government agencies such as the UK’s Information Commissioner’s Office (ICO) or the US’ Cybersecurity and Infrastructure Security Agency (CISA) enforce strict reporting deadlines around data breaches, which is seen as a positive step. However, victims often report only the minimum required information, which in turn reduces other organizations’ ability to learn from, and potentially prevent, follow-on attacks.


Hybrid Microsoft network/cloud legacy settings may impact your future security posture

Often in large organizations, there are users in your network who have the equivalent of Domain administrative rights and are not even aware of this. Your firm may even have inherited a domain setup, with its original accounts and permissions, from a Novell network that was migrated years before. Often the difference between a firm with better security and one with poor security is having a staff that takes the additional time to test and confirm that changes will have no side effects in the network. Take the example of unconstrained delegation: this is a setting that many web applications need to function, including those that are internal only to the organization, but it can expose the domain to excessive risk. Delegation allows a computer or server to save users’ Kerberos authentication tickets; these saved tickets are then used to act on the user’s behalf. Attackers love to grab these tickets, as they can then interact with the server and impersonate the identity, and in particular the privileges, of those users.
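
As a practical illustration of that “test and confirm” step, machines configured for unconstrained delegation can be enumerated with an LDAP query before any changes are made. The sketch below assumes the ldap3 Python library and uses placeholder server, credentials, and base DN; the userAccountControl bit 0x80000 (524288, TRUSTED_FOR_DELEGATION) is what marks unconstrained delegation.

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Placeholder connection details -- substitute your own domain controller,
# a low-privilege audit account, and your domain's base DN.
server = Server("ldap://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\auditor", password="REDACTED", auto_bind=True)

# Computers whose userAccountControl has the TRUSTED_FOR_DELEGATION bit (0x80000).
UNCONSTRAINED = (
    "(&(objectCategory=computer)"
    "(userAccountControl:1.2.840.113556.1.4.803:=524288))"
)

conn.search(
    search_base="DC=example,DC=com",
    search_filter=UNCONSTRAINED,
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "dNSHostName"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.dNSHostName)
```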


Why we don't have 128-bit CPUs

You might think 128-bit isn't viable because it's difficult or even impossible to do, but that's actually not the case. Lots of parts in processors, CPUs and otherwise, are 128-bit or wider, like memory buses on GPUs and the SIMD units on CPUs that run AVX instructions. We're specifically talking about being able to handle 128-bit integers, and even though 128-bit CPU prototypes have been created in research labs, no company has actually launched a 128-bit CPU. The answer might be anticlimactic: a 128-bit CPU just isn't very useful. A 64-bit CPU can handle over 18 quintillion unique numbers, from 0 to 18,446,744,073,709,551,615. By contrast, a 128-bit CPU would be able to handle over 340 undecillion numbers, and I guarantee you that you have never even seen "undecillion" in your entire life. Finding a use for calculating numbers with that many zeroes is pretty challenging ... Ultimately, the key reason why we don't have 128-bit CPUs is that there's no demand for a 128-bit hardware-software ecosystem. The industry could certainly make it if it wanted to, but it simply doesn't.
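
Those ranges are just 2^64 - 1 and 2^128 - 1, which is easy to check, and the check also shows why the lack of 128-bit hardware rarely hurts: languages with arbitrary-precision integers (Python here) already handle 128-bit values in software when needed.

```python
# Largest unsigned value representable in 64 and 128 bits.
max64 = 2**64 - 1
max128 = 2**128 - 1

print(f"{max64:,}")   # 18,446,744,073,709,551,615  (~18 quintillion)
print(f"{max128:,}")  # about 3.4 x 10**38          (~340 undecillion)

# Python integers are arbitrary precision, so 128-bit arithmetic "just works"
# in software even on 64-bit hardware.
print(max128 + 1 == 2**128)  # True
```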


Data sovereignty and security driving hybrid IT adoption in Australia

According to Nutanix’s fifth global Enterprise Cloud Index survey, data sovereignty was the top driver of infrastructure decisions in Australia, with 15% of local respondents citing it as the most important criterion when considering infrastructure investments. Data sovereignty was also one of the top three considerations for over a third (37%) of enterprises in Australia. “Control and security are the biggest factors Australian organisations are weighing up when transforming their IT infrastructure,” said Jim Steed, managing director of Nutanix Australia and New Zealand. “While public cloud was seen as a panacea for many years, it’s becoming increasingly clear that cloud is a tool – not a destination. Some workloads and applications are perfectly suited to a public cloud, but Australian organisations are moving their most sensitive and business-critical workloads back home to their on-premises infrastructure.” According to the study, over half of Australian organisations are planning to repatriate some applications from the public cloud to on-premises datacentres in the next 12 months due to data sovereignty concerns.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard