
Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: quantum key distribution (QKD) and post-quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD, which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite of the intended effect, inadvertently increasing the attack surface and bringing severe operational headaches, ironically leaving systems less secure. But delaying migration for too long also poses serious risks. Malicious actors could already be harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, and intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

Regulatory frameworks for digital sovereignty have become a national priority. The EU has set the pace with GDPR and GAIA-X, prioritizing data residency and local infrastructure. China's Cybersecurity Law and Personal Information Protection Law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's Federal Law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's Privacy Act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's Personal Data Protection Law enforces localization for sensitive sectors, and Indonesia's Personal Data Protection Law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve. The data warehouse absorbs change through slowly changing dimensions (SCD) as new business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how a data warehouse architecture, or one of its components, should be built through good design and implementation.
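To make the star schema idea concrete, here is a minimal sketch in Python using pandas; the table and column names (fact_sales, dim_customer, and so on) are hypothetical, not from the article.

```python
import pandas as pd

# Hypothetical dimension table: descriptive context for analysis.
dim_customer = pd.DataFrame({
    "customer_key": [1, 2],
    "customer_name": ["Acme Corp", "Globex"],
    "region": ["EMEA", "APAC"],
})

# Hypothetical fact table: quantitative measures keyed to dimensions.
fact_sales = pd.DataFrame({
    "customer_key": [1, 1, 2],
    "date_key": [20250701, 20250702, 20250701],
    "sales_amount": [1200.0, 450.0, 980.0],
})

# Slice-and-dice: join facts to a dimension, then aggregate by a
# dimension attribute -- the everyday move a star schema is built for.
report = (
    fact_sales.merge(dim_customer, on="customer_key")
    .groupby("region", as_index=False)["sales_amount"]
    .sum()
)
print(report)
```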


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure their organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment.


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, the argument goes, because they force you to serialize your types into generic graph objects (read: JSON, XML, or something similar). This implies that you can just transform your classes into a generic graph object at the interface edges and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points to scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (take your pick!). Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution." ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
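To illustrate the in-process alternative the author alludes to, here is a rough sketch in Python: an assumed simplification of the Active Events and Slots idea, not any specific framework's implementation, using a plain dict as the generic graph object.

```python
from typing import Any, Callable, Dict

# Registry of named slots: callers and handlers are coupled only by a
# string name and a generic payload -- not by each other's types.
slots: Dict[str, Callable[[dict], dict]] = {}

def slot(name: str):
    """Register a function as a named slot."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        slots[name] = fn
        return fn
    return register

def signal(name: str, payload: dict) -> dict:
    """Invoke a slot by name, passing a generic graph object (a dict)."""
    return slots[name](payload)

@slot("orders.create")
def create_order(args: dict) -> dict:
    # The caller knows only the slot name and the payload shape --
    # the same decoupling a microservice boundary provides, but
    # in-process and with no network hop or serialization cost.
    return {"order_id": 42, "item": args["item"], "status": "created"}

print(signal("orders.create", {"item": "widget"}))
```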


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the devices Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. The firm also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - May 09, 2025


Quote for the day:

"Create a compelling vision, one that takes people to a new place, and then translate that vision into a reality." -- Warren G. Bennis


The CIO Role Is Expanding -- And So Are the Risks of Getting It Wrong

“We are seeing an increased focus of organizations giving CIOs more responsibility to impact business strategy as well as tie it into revenue growth,” says Sal DiFranco, managing partner of the global advanced technology and CIO/CTO practices at DHR Global. He explains that CIOs who are focused on technology only for technology's sake and don’t have clear examples of business strategy and impact are not being sought after. “While innovation experience is important to have, it must come with a strong operational mindset,” DiFranco says. ... He adds that it is critical for CIOs to understand and articulate the return on investment of technology initiatives. “Top CIOs have shifted their thinking to a P&L mindset and act, speak, and communicate as the CEO of the technology organization versus being a functional support group,” he says. ... Gilbert says the greatest risk isn’t technical failure, it’s leadership misalignment. “When incentives, timelines, or metrics don’t sync across teams, even the strongest initiatives falter,” he explains. To counter this, he works to align on a shared definition of value from day one, setting clear, business-focused key performance indicators (KPIs), not just deployment milestones. Structured governance helps, too: transparent reporting, cross-functional steering committees, and ongoing feedback loops keep everyone on track.


How to Build a Lean AI Strategy with Data

In simple terms, Lean AI means focusing on trusted, purpose-driven data to power faster, smarter outcomes with AI—without the cost, complexity, and sprawl that defines most enterprise AI initiatives today. Traditional enterprise AI often chases scale for its own sake: more data, bigger models, larger clouds. Lean AI flips that model—prioritizing quality over quantity, outcomes over infrastructure, and agility over over-engineering. ... A lean AI strategy focuses on curating high-quality, purpose-driven datasets tailored to specific business goals. Rather than defaulting to massive data lakes, organizations continuously collect data but prioritize which data to activate and operationalize based on current needs. Lower-priority data can be archived cost-effectively, minimizing unnecessary processing costs while preserving flexibility for future use. ... Data governance plays a pivotal role in lean AI strategies—but it should be reimagined. Traditional governance frameworks often slow innovation by restricting access and flexibility. In contrast, lean AI governance enhances usability and access while maintaining security and compliance. ... Implementing lean AI requires a cultural shift in how organizations manage data. Focusing on efficiency, purpose, and continuous improvement can drive innovation without unnecessary costs or risks—a particularly valuable approach when cost pressures are increasing.


Networking errors pose threat to data center reliability

“Data center operators are facing a growing number of external risks beyond their control, including power grid constraints, extreme weather, network provider failures, and third-party software issues. And despite a more volatile risk landscape, improvements are occurring.” ... “Power has been the leading cause. Power is going to be the leading cause for the foreseeable future. And one should expect it because every piece of equipment in the data center, whether it’s a facilities piece of equipment or an IT piece of equipment, it needs power to operate. Power is pretty unforgiving,” said Chris Brown, chief technical officer at Uptime Institute, during a webinar sharing the report findings. “It’s fairly binary. From a practical standpoint of being able to respond, it’s pretty much on or off.” ... Still, IT and networking issues increased in 2024, according to Uptime Institute. The analysis attributed the rise in outages due to increased IT and network complexity, specifically, change management and misconfigurations. “Particularly with distributed services, cloud services, we find that cascading failures often occur when networking equipment is replicated across an entire network,” Lawrence explained. “Sometimes the failure of one forces traffic to move in one direction, overloading capacity at another data center.”


Unlocking ROI Through Sustainability: How Hybrid Multicloud Deployment Drives Business Value

One of the key advantages of hybrid multicloud is the ability to optimise workload placement dynamically. Traditional on-premises infrastructure often forces businesses to overprovision resources, leading to unnecessary energy consumption and underutilisation. With a hybrid approach, workloads can seamlessly move between on-prem, public cloud, and edge environments based on real-time requirements. This flexibility enhances efficiency and helps mitigate risks associated with cloud repatriation. Many organisations have found that shifting back from public cloud to on-premises infrastructure is sometimes necessary due to regulatory compliance, data sovereignty concerns, or cost considerations. A hybrid multicloud strategy ensures organisations can make these transitions smoothly without disrupting operations. ... With the dynamic nature of cloud environments, enterprises really require solutions that offer a unified view of their hybrid multicloud infrastructure. Technologies that integrate AI-driven insights to optimise energy usage and automate resource allocation are gaining traction. For example, some organisations have addressed these challenges by adopting solutions such as Nutanix Cloud Manager (NCM), which helps businesses track sustainability metrics while maintaining operational efficiency.


'Lemon Sandstorm' Underscores Risks to Middle East Infrastructure

The compromise started at least two years ago, when the attackers used stolen VPN credentials to gain access to the organization's network, according to a May 1 report published by cybersecurity firm Fortinet, which helped with the remediation process that began late last year. Within a week, the attacker had installed Web shells on two external-facing Microsoft Exchange servers and then updated those backdoors to improve their ability to remain undetected. In the following 20 months, the attackers added more functionality, installed additional components to aid persistence, and deployed five custom attack tools. The threat actors, which appear to be part of an Iran-linked group dubbed "Lemon Sandstorm," did not seem focused on compromising data, says John Simmons, regional lead for Fortinet's FortiGuard Incident Response team. "The threat actor did not carry out significant data exfiltration, which suggests they were primarily interested in maintaining long-term access to the OT environment," he says. "We believe the implication is that they may [have been] positioning themselves to carry out a future destructive attack against this CNI." Overall, the attack follows a shift by cyber-threat groups in the region, which are now increasingly targeting CNI. 


Cloud repatriation hits its stride

Many enterprises are now confronting a stark reality. AI is expensive, not just in terms of infrastructure and operations, but in the way it consumes entire IT budgets. Training foundational models or running continuous inference pipelines takes resources an order of magnitude greater than the average SaaS or data analytics workload. As competition in AI heats up, executives are asking tough questions: Is every app in the cloud still worth its cost? Where can we redeploy dollars to speed up our AI road map? ... Repatriation doesn’t signal the end of cloud, but rather the evolution toward a more pragmatic, hybrid model. Cloud will remain vital for elastic demand, rapid prototyping, and global scale—no on-premises solution can beat cloud when workloads spike unpredictably. But for the many applications whose requirements never change and whose performance is stable year-round, the lure of lower-cost, self-operated infrastructure is too compelling in a world where AI now absorbs so much of the IT spend. In this new landscape, IT leaders must master workload placement, matching each application to a technical requirement and a business and financial imperative. Sophisticated cost management tools are on the rise, and the next wave of cloud architects will be those as fluent in finance as they are in Kubernetes or Terraform.


6 tips for tackling technical debt

Like most everything else in business today, debt can’t successfully be managed if it’s not measured, Sharp says, adding that IT needs to get better at identifying, tracking, and measuring tech debt. “IT always has a sense of where the problems are, which closets have skeletons in them, but there’s often not a formal analysis,” he says. “I think a structured approach to looking at this could be an opportunity to think about things that weren’t considered previously. So it’s not just knowing we have problems but knowing what the issues are and understanding the impact. Visibility is really key.” ... Most organizations have some governance around their software development programs, Buniva says. But a good number of those governance programs are not as strong as they should be nor detailed enough to inform how teams should balance speed with quality — a fact that becomes more obvious with the increasing speed of AI-enabled code production. ... Like legacy tech more broadly, code debt is a fact of life and, as such, will never be completely paid down. So instead of trying to get the balance to zero, IT exec Rishi Kaushal prioritizes fixing the most problematic pieces — the ones that could cost his company the most. “You don’t want to focus on fixing technical debt that takes a long time and a lot of money to fix but doesn’t bring any value in fixing,” says Kaushal.


AI Won’t Save You From Your Data Modeling Problems

Historically, data modeling was a business intelligence (BI) and analytics concern, focused on structuring data for dashboards and reports. However, AI applications shift this responsibility to the operational layer, where real-time decisions are made. While foundation models are incredibly smart, they can also be incredibly dumb. They have vast general knowledge but lack context and your information. They need structured and unstructured data to provide this context, or they risk hallucinating and producing unreliable outputs. ... Traditional data models were built for specific systems: relational for transactions, documents for flexibility, and graphs for relationships. But AI requires all of them at once, because an AI agent might talk to the transactional database first for enterprise application data, such as flight schedules from our previous example. Then, based on that response, it might query a document to build a prompt that uses a semantic web representation for flight-rescheduling logic. In this case, a single model format isn’t enough. This is why polyglot data modeling is key. It allows AI to work across structured and unstructured data in real time, ensuring that both knowledge retrieval and decision-making are informed by a complete view of business data.
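A minimal sketch of the polyglot flow described above, with hypothetical names and in-memory stand-ins (sqlite for the transactional store, a dict for the document store); the flights table and rebooking_policy document are illustrative assumptions.

```python
import sqlite3

# Stand-in transactional store (hypothetical flight schedule data).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flights (flight_no TEXT, departs TEXT, status TEXT)")
db.execute("INSERT INTO flights VALUES ('LH123', '2025-07-03T14:00', 'CANCELLED')")

# Stand-in document store (hypothetical rescheduling policy document).
documents = {
    "rebooking_policy": "Rebook cancelled passengers on the next flight with free seats."
}

def build_prompt(flight_no: str) -> str:
    # Step 1: query the transactional store for current state.
    row = db.execute(
        "SELECT flight_no, departs, status FROM flights WHERE flight_no = ?",
        (flight_no,),
    ).fetchone()
    # Step 2: retrieve the relevant document to ground the model.
    policy = documents["rebooking_policy"]
    # Step 3: combine both so the model answers with real business context.
    return (
        f"Flight {row[0]} departing {row[1]} is {row[2]}.\n"
        f"Policy: {policy}\n"
        "Propose a rebooking plan for affected passengers."
    )

print(build_prompt("LH123"))
```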


Your password manager is under attack, and this new threat makes it worse

"Password managers are high-value targets and face constant attacks across multiple surfaces, including cloud infrastructure, client devices, and browser extensions," said NordPass PR manager Gintautas Degutis. "Attack vectors range from credential stuffing and phishing to malware-based exfiltration and supply chain risks." Googling the phrase "password manager hacked" yields a distressingly long list of incursions. Fortunately, in most of those cases, passwords and other sensitive information were sufficiently encrypted to limit the damage. ... One of the most recent and terrifying threats to make headlines came from SquareX, a company selling solutions that focus on the real-time detection and mitigation of browser-based web attacks. SquareX spends a great deal of its time obsessing over the degree to which browser extension architectures represent a potential vector of attack for hackers. ... For businesses and enterprises, the attack is predicated on one of two possible scenarios. In the first scenario, users are left to make their own decisions about what extensions are loaded onto their systems. In this case, they are putting the entire enterprise at risk. In the second scenario, someone in an IT role with the responsibility of managing the organization's approved browser and extension configurations has to be asleep at the wheel. 


Developing Software That Solves Real-World Problems – A Technologist’s View

Software architecture is not just a technical plan but a way to turn an idea into reality. A good system can model users’ behaviors and usage, expand to meet demand, secure data and combine well with other systems. It brings the concepts of distributed systems, APIs, security layers and front-end interfaces together into one cohesive and easy-to-use product. I have been involved with building APIs that are crucial for integrating multiple products to provide a consistent user experience to consumers of these products. Along with a group of architects, we played a crucial role in breaking down these complex integrations into manageable components and designing easy-to-implement API interfaces. Also, using cloud services, these APIs were designed to be highly resilient. ... One of the most important lessons I have learned as a technologist is that just because we can build something does not mean we should. While working on a project related to financing a car, we were able to collect personally identifiable information (PII). Initially, we stored it for a long duration. However, we were unaware of the implications. When we discussed the situation with the architecture and security teams, we found out that we did not have ownership of the data and that it was very risky to store it for a long period. We mitigated the risk by reducing the data retention period to what would be useful to users.

Daily Tech Digest - May 26, 2024

The modern CISO: Scapegoat or value creator?

To showcase the value of their programs and demonstrate effectiveness, CISOs must establish clear communication and overcome the disconnect between the board and their team. It’s up to the CISO to ensure the board understands the level of cyber risk their organization is facing and what they need to increase the cyber resilience of their organization. Presenting cyber risk levels in monetary terms with actionable next steps is necessary to bring the board of directors onto the same page and open an honest line of communication, while elevating the cybersecurity team to the role of value creator. ... CISOs are deeply wary about sharing too many details on their cybersecurity posture in the public domain, because of the unnecessary and preventable risk of exposing their organizations to cyberattacks, which are expected to cause $10.5 trillion in damages by 2025. Filing an honest 10-K while preserving your organization’s cyber defenses requires a delicate balance. We’ve already seen Clorox fall victim when the balance was off. ... Given the pace at which the cybersecurity landscape is continuing to evolve, the CISO’s job is getting tougher.


This Week in AI: OpenAI and publishers are partners of convenience

In an appearance on the “All-In” podcast, Altman said that he “definitely [doesn’t] think there will be an arms race for [training] data” because “when models get smart enough, at some point, it shouldn’t be about more data — at least not for training.” Elsewhere, he told MIT Technology Review’s James O’Donnell that he’s “optimistic” that OpenAI — and/or the broader AI industry — will “figure a way out of [needing] more and more training data.” Models aren’t that “smart” yet, leading OpenAI to reportedly experiment with synthetic training data and scour the far reaches of the web — and YouTube — for organic sources. But let’s assume they one day don’t need much additional data to improve by leaps and bounds. ... Through licensing deals, OpenAI effectively neutralizes a legal threat — at least until the courts determine how fair use applies in the context of AI training — and gets to celebrate a PR win. Publishers get much-needed capital. And the work on AI that might gravely harm those publishers continues.


Private equity looks to the CIO as value multiplier

A newer way of thinking about value creation focuses on IT, he says, because nearly every company, perhaps even the mom-and-pop coffee shop down the street, is a heavy IT user. “With this third wave, we’re seeing private equity firms retain in-house IT leadership, and that in-house IT leadership has led to more value creation,” Buccola says. “Firms with great IT leadership, a sound IT strategy, and a forward-thinking IT strategy, are creating more value.” ... “All roads lead to IT,” says Corrigan, a veteran of PE-backed firms, with World Insurance backed by Goldman Sachs and Charlesbank. “Every aspect of the business is dependent on some type of technology.” Corrigan sees CIOs being more frequently consulted when PE-backed firms look to IT systems to drive operational efficiencies. In some cases, cutting costs is a quicker path to return on investment than revenue growth. “Every dollar you can cut out of the bottom line is worth several dollars of revenue generated,” he says. ... “The modern CIO in a private equity environment is no longer just a back-office role but a strategic partner capable of driving the business forward,” he says.


Sad Truth Is, Bad Tests Are the Norm!

When it comes to testing, many people seem to hold the worldview that hard-to-maintain tests are the norm and acceptable. In my experience, the major culprits are BDD frameworks that are based on text feature files. This amplifies waste. The extra feature file layer in theory allows: the user to swap out the language at a later date; a business person to write user stories and/or acceptance criteria; a business person to read the user stories and/or acceptance criteria; collaboration; etc. In practice, you have added more complexity than you think, for little benefit. I am explicitly critiquing the approach of writing the extra feature file layer first, not the benefits of BDD as a concept. You test more efficiently, with better results, by not writing the feature file layer, as with Smart BDD, where it’s generated by code. Here I compare the complexities and differences between Cucumber and Smart BDD. ... Culture is hugely important; I’m sure we, our bosses, and senior leaders would all ultimately agree with the following: for more value, you need more feedback and less waste; for more feedback, you need more value and less waste; for less waste, you need more value and more feedback.
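To make the comparison concrete, here is a minimal illustrative sketch in Python (not Cucumber's or Smart BDD's actual APIs): the text-first style keeps a separate Gherkin feature file plus a glue layer, while a code-first style expresses the same scenario directly as a test, from which given/when/then reporting can be generated.

```python
# Text-first BDD keeps a separate feature file, e.g.:
#
#   Scenario: Withdraw cash
#     Given an account with balance 100
#     When the user withdraws 40
#     Then the balance is 60
#
# ...plus a glue layer mapping each line to code. The code-first style
# below expresses the same scenario as plain code; a readable report in
# given/when/then form can be generated from the code itself.

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        self.balance -= amount

def test_withdraw_cash():
    # given an account with balance 100
    account = Account(balance=100)
    # when the user withdraws 40
    account.withdraw(40)
    # then the balance is 60
    assert account.balance == 60

test_withdraw_cash()  # runs standalone; under pytest it is discovered automatically
```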


6 Months Under the SEC’s Cybersecurity Disclosure Rules

There have been calls for regulatory harmonization. For example, the Biden-Harris Administration’s National Cybersecurity Strategy released last year calls for harmonization and streamlining of new and existing regulations to ease the burden of compliance. But in the meantime, enterprise leadership teams must operate in this complicated regulatory landscape, made only more complicated by budgetary issues. “Security budgets aren't growing for the most part. So, there's this tension between diverting resources to security versus diverting resources to compliance … on top of everything else that the CISOs have going on,” says Algeier. So, what should CISOs and enterprise leadership teams be doing as they continue to work under these SEC rules and other regulatory obligations? “CISOs should keep in mind the ability to quickly, easily, and efficiently fulfill the requirements laid out by the SEC, especially if they were to fall victim to an attack,” says Das. “This means having not only the right processes in place, but investments into tools that can ensure reporting occurs in the newly condensed timeline.”


Despite increased budgets, organizations struggle with compliance

“While regulations are driving strategy shifts and increased budgets, the talent shortage and fragmented infrastructure remain obstacles to compliance and resilience. To succeed, organizations must find the right balance between human expertise for complex situations and AI-enhanced automation tools for routine tasks. This will alleviate operational strain and ensure security professionals can focus on the parts of the job where human judgment is irreplaceable.” ... 93% of organizations report rethinking their cybersecurity strategy in the past year due to the rise of new regulations, with 58% stating they have completely reconsidered their approach. The strategy shifts are also impacting the roles of cybersecurity decision-makers, with 45% citing significant new responsibilities. 92% of organizations reported an increase in their allocated budgets. Among these organizations, a significant portion (36%) witnessed budget increases of 20% to 49%, and a notable 23% saw increases exceeding 50%. 


Fundamentals of Dimensional Data Modeling

Dimensional modeling focuses its diagramming on facts and dimensions: Facts contain crucial quantitative data to track business processes. Examples of these metrics include sales figures or number of subscriptions. Dimensions contain referential pieces of information. Examples of dimensions include customer name, price, date, or location. Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardizing dimensions through presenting the data blueprint intuitively. ... Dimensional data modeling promises quick access to business insights when searching a data warehouse. Modelers provide a template to guide business conversations across various teams by selecting the business process, defining the grain, and identifying the dimensions and fact tables. Alignment in the design requires these processes, and Data Governance plays an integral role in getting there.


Why the AI Revolution Is Being Led from Below

If shadow IT was largely defined by some teams’ use of unauthorized vendors and platforms, shadow AI is often driven by the use of AI tools like ChatGPT by individual employees and users, on their own and even surreptitiously. ... So why is that a problem? The proliferation of Shadow AI can deliver many of the same benefits as officially sanctioned AI strategies, streamlining processes, automating repetitive tasks, and enhancing productivity. Employees are mainly drawn to deploy their own AI tools for precisely these reasons — they can hand off chunks of taxing work to these invisible assistants. Some industry observers see the plus side of all this and are actively encouraging the “democratization” of AI tools. At this week’s The Financial Brand Forum 2024, Cornerstone Advisors’ Ron Shevlin made it his top recommendation: “My #1 piece of advice is ‘drive bottom-up use.’ Encourage widespread AI experimentation by your team members. Then document and share the process and output improvements as widely as possible.”


A Strategic Approach to Stopping SIM Swap Fraud

Fraudsters are cautious about their return on investment. SIM swap fraud is a high-risk endeavor, and they typically expect higher rewards. It involves the risk of physically visiting telco operator premises, obtaining genuine-looking customer identification documents, using employees as mules, or bribing bank or telco staff. Their targets are mostly high-balance accounts, including both bank accounts and wallets. Over the years, we have learned that customers with substantial account balances might often share bank details and OTPs during social engineering schemes, but they typically refrain from sharing their PIN due to the perceived risk involved. Even if a small percentage of customers were to share their PIN, the risk would still be minimized, as the majority of potential victims would refrain from sharing their PIN. The fraudsters would need to compromise at three levels instead of two: data gathering, compromising the telco operator and persuading the customer. If customers detect something suspicious, they may become alert, resulting in fraudsters wasting their investments.


Complexity snarls multicloud network management

While each cloud provider does its best to make networking simple across clouds, all have very nuanced differences and varied best practices for approaching the same problem, says Ed Wood, global enterprise network lead at business advisory firm Accenture. This makes being able to create enterprise-ready, secured networks across the cloud challenging, he adds. Wasim believes that a lack of intelligent data utilization at crucial stages, from data ingestion to proactive management, further complicates the process. “The sheer scale of managing resources, coupled with the dynamic nature of cloud environments, makes it challenging to achieve optimal performance and efficiency.” Making network management even more challenging is a lack of clarity on roles and responsibilities. This can be attributed to an absence of agreement on shared responsibility models, Wasim says. As a result, stakeholders, including customers, cloud service providers, and any involved third parties, might each hold different views on responsibility and accountability regarding data compliance, controls, and cloud operations management.



Quote for the day:

"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills

Daily Tech Digest - November 18, 2023

What You Need to Know About Securing 5G Networks and Communication

IoT devices have exploded over the past several years, and this growth shows no signs of slowing down. And all of these devices have one thing in common: remote connectivity via a public 4G or 5G network, or, increasingly, a private 5G network. This explosion of connected devices creates an expanded attack surface, since the entire network is only as secure as its weakest link. Specifically, even if a network itself is secure, any devices attached to it that are not secure in how they communicate or receive updates create a breach opportunity. As a result, it’s essential that every device has an identity and each identity is managed. This might sound daunting, but it’s not as complex as it seems at first – it goes back to the building blocks of PKI. Much of the security industry has a handle on running PKI for enterprise networks in their organization (think laptops, mobile devices, and so on). Therefore, security teams are also equipped to do PKI for these smart devices — it’s the same approach for a different endpoint.
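As a rough sketch of giving each device a managed identity, here is a minimal example using Python's cryptography library to mint a device certificate. It is self-signed for brevity; in a real deployment the organization's device CA would sign a request from the device, and the device name is a hypothetical example.

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Each device gets its own key pair; the private key never leaves the device.
key = ec.generate_private_key(ec.SECP256R1())

# The identity is a certificate binding a device name to its public key.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sensor-0042")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed here; a device CA would sign in practice
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
    .sign(key, hashes.SHA256())
)

print(cert.public_bytes(serialization.Encoding.PEM).decode())
```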


To AI Hell and Back: Finding Salvation Through Empathy

Iannopollo said the guides assisting in AI Hell could come from IT, marketing, or the executive team. “All of them understand the incredible opportunity of generative AI and the unparalleled transformative power of the new technology. And they know that without adequate security, privacy, and risk governance, that potential can’t be realized.” According to Forrester’s research, 36% of respondents in those groups said privacy and security are the greatest barriers to generative AI adoption, while another 31% said governance and risk were the biggest hurdle. Another 61% cited concerns that GenAI could violate privacy and data protection laws like the EU’s GDPR. “So, concerns exist,” she said. “But remember, Hell is a place of confusion.” As more frameworks and regulations come online, there may be less confusion, and the guides will help businesses assess their AI adoption. ... Once you are out of AI Hell, like Dante, your story is not complete. Dante had to first stop in purgatory. And after spending time in AI Hell dealing with the questions of risk and threats, businesses will need to figure out a compliance strategy.


Conceptual vs. Logical vs. Physical Data Modeling

“Companies need to do Data Modeling to solve a specific business problem or answer a business question,” summarized Aiken. IT and businesses need to share goals and understanding to get to a data solution. Moreover, there needs to be a common language between systems for data to flow smoothly. However, slapping together any model or a big overarching enterprise architecture will not be helpful. A data model needs to achieve a particular purpose, and getting there requires a systematic process. Aiken’s three-dimensional model evolution framework provides resources for an improved data platform. It considers the existing architecture and the evolution needed to meet business needs and validates that stakeholders and builders are on the same page. A combination of conceptual, logical, and physical data models promises meaningful and useful results, especially where business and IT need to achieve a common objective. Doing the data modeling correctly and understanding requirements frees up 20% of the time and money for corporations to leverage their data capabilities and get more value from them.
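One way to see the three levels side by side, as a minimal hypothetical sketch: the conceptual model just names entities and relationships, the logical model pins down attributes and types, and the physical model commits to a concrete schema for one platform. The Customer/Order example is illustrative, not from the article.

```python
import sqlite3
from dataclasses import dataclass

# Conceptual: "a Customer places Orders" -- entities and relationships only.

# Logical: attributes and types, still platform-independent.
@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # the relationship back to Customer
    total: float

# Physical: a concrete schema for one platform (SQLite here).
DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       REAL NOT NULL
);
"""

sqlite3.connect(":memory:").executescript(DDL)  # the physical model is executable
```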


AI: The indispensable ally in the information age

The implementation of AI in data centers must be viewed through a dual lens: risk mitigation and knowledge preservation. As we face a generational turnover in expertise within the industry, with a significant proportion of seasoned professionals retiring, there's an urgent need to capture and transfer this wealth of knowledge. AI and machine learning algorithms, when correctly trained and utilized, can play a crucial role in bridging this knowledge gap. By learning from clean data, and benchmarking and decisions made by experienced personnel, AI systems can emulate, and eventually, enhance these expert-driven processes. This transfer of knowledge is vital not just for maintaining current operational standards, but also for paving the way for more advanced, efficient, and resilient data center architectures. Moreover, AI's potential in managing and reducing operational risks in data centers is monumental. Advanced predictive analytics can foresee and mitigate potential failures, while continuous monitoring AI systems can identify anomalies that hint at future problems, allowing for preemptive maintenance and risk aversion.


Shadowy Hack-for-Hire Group Behind Sprawling Web of Global Cyberattacks

The cybersecurity firm's exhaustive analysis of data that Reuters journalists collected showed near-conclusive links between Appin and numerous data theft incidents. These included theft of email and other data by Appin from Pakistani and Chinese government officials. SentinelOne also found evidence of Appin carrying out defacement attacks on sites associated with the Sikh religious minority community in India and of at least one request to hack into a Gmail account belonging to a Sikh individual suspected of being a terrorist. "The current state of the organization significantly differs from its status a decade ago," says Tom Hegel, principal threat researcher at SentinelLabs. "The initial entity, 'Appin,' featured in our research, no longer exists but can be regarded as the progenitor from which several present-day hack-for-hire enterprises have emerged," he says. Factors such as rebranding, employee transitions, and the widespread dissemination of skills contribute to Appin being recognized as the pioneering hack-for-hire group in India, he says. 


Security Firm COO Hacked Hospitals to Drum Up Business

According to the plea agreement, Singla on Sept. 27, 2018, knowingly transmitted a command that resulted in an unauthorized modification to the configuration template for the ASCOM phone system at Gwinnett Medical Center's Duluth hospital campus. As a result, all of the Duluth hospital's ASCOM phones that were connected to the phone system during Singla's transmission were rendered inoperable, and more than 200 ASCOM handset devices were taken offline, the court document says. Those phones were used by Duluth hospital staff, including doctors and nurses, for internal communication, including for "code blue" emergencies. The ASCOM phones were used to place calls outside of the hospital, the court document says. On that same day, Singla - without authorization - obtained information including names, birthdates and the sex of more than 300 patients from a Hologic R2 Digitizer connected to a mammogram machine at Gwinnett's Lawrenceville hospital campus, the document says. The digitizer, which was accessible through Gwinnett's virtual private network, was protected by a password. 


How to Structure and Build a Team For Long-Term Success

Leaders have to be careful not to get caught in a situation where somebody could misconstrue their kindness or attention, but being in leadership doesn't have to mean sacrificing friendships. Balance being too friendly with being able to offer necessary corrections. By nature, I tend to be a people pleaser, so I must work on being tougher — especially early in relationships. After my collegiate basketball career ended, I became a high school basketball referee. I found that the whole game went smoother if I was tough in the first quarter of a game. It is important for leaders to establish a sense of control when they first hire a new team member; then they can infuse the second, third and fourth quarters with more friendship. Leaders can have situations that test the relationships they're working to build. Let's say someone has two people on their team, and they have to decide which one gets promoted. The one who didn't get promoted might feel like the leader let them down. Leaders must maintain enough professional distance so that an employee knows it was not due to favoritism in this situation.


Data is Everybody’s Business: The Fundamentals of Data Monetization

Companies get better at data monetization by practicing it. “Rather than wait for the right set of capabilities to magically appear,” Owens says, “businesses should start engaging in monetization activities. The learning and the returns come from doing, not from talking about doing. For starters, organizations could choose one process or product to improve or a single business challenge to solve with data.” Creating data assets also means creating organizational governance so that the right people use the data in the right ways. Data assets can be monetized only after data is properly cleaned, permissioned with the right security, and made accessible to authorized users. “If you aren’t purposely managing and monetizing your data, it won’t pay off,” says Wixom. A big problem with data is that everybody is starting from scratch all the time, says Wixom. “There isn’t enough attention to accumulating knowledge and skills for the future benefit of the organization. But if you create data assets and establish enterprise capabilities to manage them properly, data can be reused limitlessly for all kinds of value-creating reasons across an organization.”


Blockchain could save AI by cracking open the black box

Blockchain is finally being unchained from crypto, and many now see its potential as a foundation of support and validation for another emerging technology -- AI. Blockchain -- and other distributed ledger technologies -- could even help solve AI's black box problem "by providing a transparent, immutable ledger to monitor model training and trace decision-making processes," according to the authors of a new report. "This gives organizations the ability to audit the data and algorithms used, enabling greater security and trust in AI systems." ... "As AI operations go mainstream -- and as people raise concerns about the technology -- leaders are recognizing the need for a more responsible AI that prioritizes data security and transparency," the survey's authors point out. "Ensuring trustworthiness and reliability of their AI tools is a top priority for businesses, and blockchain is the turnkey solution for addressing the risks that come with AI implementation." Executives have developed a greater level of understanding of blockchain. Seventy-seven percent say they fully understand blockchain and can explain the value of it to their teams -- up five percentage points over last year's survey. 
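At its core, the "transparent, immutable ledger" idea reduces to an append-only hash chain over training and decision events. Here is a minimal illustrative sketch in plain Python, tied to no particular blockchain platform or API; the event fields are hypothetical.

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each entry commits to the one before it."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> str:
        record = {"prev": self._prev_hash, "event": event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((digest, record))
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for digest, record in self.entries:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True

chain = AuditChain()
chain.append({"step": "ingest", "dataset": "train-v1", "rows": 10_000})
chain.append({"step": "train", "model": "classifier-v1", "epochs": 3})
assert chain.verify()
```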


FinOps Debuts Cloud Transparency Standards

Given that the project is backed by the largest players in the multi-billion dollar cloud market, several large enterprise-level users, such as Goldman Sachs and Walmart, have also backed this initiative. “We are establishing FOCUS as the cornerstone lexicon of FinOps by providing an open source, vendor-agnostic specification featuring a unified schema and language,” says Mike Fuller, CTO at the FinOps Foundation. “With this release, we are paving the way for FOCUS to foster collaboration among major cloud providers, FinOps vendors, leading SaaS providers and forward-thinking FinOps enterprises to establish a unified, serviceable framework for cloud billing data, increasing trust in the data and making it easier to understand the value of cloud spend,” Fuller said in a statement. As readers would know, cloud operators provide customers with billing data covering the costs of services they use, which also includes granular details around individual product costs and discounts, if any. Businesses use this billing data from the service providers to track their spending, forecast future costs and build their SaaS budgets.
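As a rough illustration of what a unified billing schema buys you, the sketch below maps two hypothetical provider-specific billing rows onto a handful of FOCUS-like columns; the column names here are simplified stand-ins inspired by the spec, not the official FOCUS schema.

```python
# Hypothetical raw billing rows from two providers, each with its own field names.
provider_a_row = {"svc": "vm-standard", "usd_cost": 12.40, "day": "2023-11-01"}
provider_b_row = {"serviceName": "ComputeEngine", "billedAmount": 9.75,
                  "usageDate": "2023-11-01"}

def to_focus_like(row: dict, provider: str) -> dict:
    """Normalize a provider-specific row to simplified, FOCUS-like columns."""
    if provider == "A":
        return {"Provider": "A", "ServiceName": row["svc"],
                "BilledCost": row["usd_cost"], "ChargePeriod": row["day"]}
    if provider == "B":
        return {"Provider": "B", "ServiceName": row["serviceName"],
                "BilledCost": row["billedAmount"], "ChargePeriod": row["usageDate"]}
    raise ValueError(f"unknown provider: {provider}")

# Once normalized, spend can be tracked and forecast across providers uniformly.
normalized = [to_focus_like(provider_a_row, "A"), to_focus_like(provider_b_row, "B")]
print(sum(r["BilledCost"] for r in normalized))
```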



Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl von Clausewitz

Daily Tech Digest - September 28, 2023

What is artificial general intelligence really about?

AGI is a hypothetical intelligent agent that can accomplish the same intellectual achievements humans can. It could reason, strategize, plan, use judgment and common sense, and respond to and detect hazards or dangers. This type of artificial intelligence is much more capable than the AI that powers the cameras in our smartphones, drives autonomous vehicles, or completes the complex tasks we see performed by ChatGPT. ... AGI could change our world, advance our society, and solve many of the complex problems humanity faces whose solutions are far beyond humans' reach. It could even identify problems humans don't even know exist. "If implemented with a view to our greatest challenges, [AGI] can bring pivotal advances in healthcare, improvements to how we address climate change, and developments in education," says Chris Lloyd-Jones, head of open innovation at Avanade. ... AGI carries considerable risks, and experts have warned that advancements in AI could cause significant disruptions to humankind. But expert opinions vary on quantifying the risks AGI could pose to society.


How to avoid the 4 main pitfalls of cloud identity management

DevOps and Security teams are often at odds with each other. DevOps wants to ship applications and software as fast and efficiently as possible, while Security’s goal is to slow the process down and make sure bad actors don’t get in. At the end of the day, both sides are right – fast development is useless if it creates misconfigurations or vulnerabilities and security is ineffective if it’s shoved toward the end of the process. Historically, deploying and managing IT infrastructure was a manual process. This setup could take hours or days to configure, and required coordination across multiple teams. (And time is money!) Infrastructure as code (IaC) changes all of that and enables developers to simply write code to deploy the necessary infrastructure. This is music to DevOps ears, but creates additional challenges for security teams. IaC puts infrastructure in the hands of developers, which is great for speed but introduces some potential risks. To remedy this, organizations need to be able to find and fix misconfigurations in IaC to automate testing and policy management.
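As a toy illustration of finding misconfigurations in IaC before deployment, the sketch below scans hypothetical parsed resources (shaped loosely like a Terraform plan) for a security group that exposes SSH to the internet; the resource shape and the policy rule are assumptions for illustration.

```python
# Hypothetical parsed IaC resources (e.g., from a Terraform plan JSON).
resources = [
    {"type": "security_group", "name": "web",
     "ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    {"type": "security_group", "name": "db",
     "ingress": [{"port": 22, "cidr": "0.0.0.0/0"}]},  # misconfiguration
]

def check_open_ssh(resource: dict) -> list[str]:
    """Flag security groups exposing SSH (port 22) to the whole internet."""
    findings = []
    for rule in resource.get("ingress", []):
        if rule["port"] == 22 and rule["cidr"] == "0.0.0.0/0":
            findings.append(f"{resource['name']}: port 22 open to 0.0.0.0/0")
    return findings

violations = [finding
              for r in resources if r["type"] == "security_group"
              for finding in check_open_ssh(r)]
if violations:
    # In a CI pipeline, a nonzero exit fails the build before deployment.
    raise SystemExit("policy violations: " + "; ".join(violations))
```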


Why a DevOps approach is crucial to securing containers and Kubernetes

DevOps, which is heavily focused on automation, has significantly accelerated development and delivery processes, making the production cycle lightning fast, leaving traditional security methods lagging behind, Carpenter says. “From a security perspective, the only way we get ahead of that is if we become part of that process,” he says. “Instead of checking everything at the point it’s deployed or after deployment, applying our policies, looking for problems, we embed that into the delivery pipeline and start checking security policy in an automated fashion at the time somebody writes source code, or the time they build a container image or ship that container image, in the same way developers today are very used to, in their pipelines.” It’s “shift left security,” or taking security policies and automating them in the pipeline to unearth problems before they get to production. It has the advantage of speeding up security testing and enables security teams to keep up with the efficient DevOps teams. “The more things we can fix early, the less we have to worry about in production and the more we can find new, emerging issues, more important issues, and we can deal with higher order problems inside the security team,” he says.


Understanding Europe's Cyber Resilience Act and What It Means for You

The act is broader than a typical IoT security standard because it also applies to software that is not embedded. That is to say, it applies to the software you might use on your desktop to interact with your IoT device, rather than just to the software on the device itself. Since non-embedded software is where many vulnerabilities occur, this is an important change. A second important change is the requirement for five years of security updates and vulnerability reporting. Few consumers who buy an IoT device expect regular software updates and security patches over that length of time, but both will be a requirement under the CRA. The third important point of the standard is the requirement for some sort of reporting and alerting system for vulnerabilities, so that consumers can report vulnerabilities, see the status of security and software updates for devices, and be warned of any risks. The CRA also requires that manufacturers notify the European Union Agency for Cybersecurity (ENISA) of a vulnerability within 24 hours of discovery.


Conveying The AI Revolution To The Board: The Role Of The CIO In The Era Of Generative AI

Narratives can be powerful, especially when they’re rooted in reality. By curating a list of businesses that have thrived with or invested in AI—especially those within your sector—and bringing forth their successful integration case studies, you can demonstrate not just possibilities but proven success. It conveys a simple message: If they can, so can we. ... Change, especially one as foundational as AI, can be daunting. Set up a task force to outline the stages of AI implementation, starting with pilot projects. A clear, step-by-step road map demystifies the journey from our current state to an AI-integrated future. It offers a sense of direction by detailing resource allocations, potential milestones and timelines—transforming the AI proposition from a vague idea into a concrete plan. ... In our zeal to champion AI, we mustn’t overlook the ethical considerations it brings. Draft an AI ethics charter, highlighting principles and practices to ensure responsible AI adoption. Addressing issues like data privacy, bias mitigation and the need for transparent algorithms proactively showcases a balanced, responsible approach.


Chip industry strains to meet AI-fueled demands — will smaller LLMs help?

Avivah Litan, a distinguished vice president analyst at research firm Gartner, said sooner or later the scaling of GPU chips will fail to keep up with growth in AI model sizes. “So, continuing to make models bigger and bigger is not a viable option,” she said. iDEAL Semiconductor's Burns agreed, saying, "There will be a need to develop more efficient LLMs and AI solutions, but additional GPU production is an unavoidable part of this equation." "We must also focus on energy needs," he said. "There is a need to keep up in terms of both hardware and data center energy demand. Training an LLM can represent a significant carbon footprint. So we need to see improvements in GPU production, but also in the memory and power semiconductors that must be used to design the AI server that utilizes the GPU." Earlier this month, the world’s largest chipmaker, TSMC, admitted it's facing manufacturing constraints and limited availability of GPUs for AI and HPC applications. 


NoSQL Data Modeling Mistakes that Ruin Performance

Getting your data modeling wrong is one of the easiest ways to ruin your performance. And it’s especially easy to screw this up when you’re working with NoSQL, which (ironically) tends to be used for the most performance-sensitive workloads. NoSQL data modeling might initially appear quite simple: just model your data to suit your application’s access patterns. But in practice, that’s much easier said than done. Fixing data modeling is no fun, but it’s often a necessary evil. If your data modeling is fundamentally inefficient, your performance will suffer once you scale to some tipping point that varies based on your specific workload and deployment. Even if you adopt the fastest database on the most powerful infrastructure, you won’t be able to tap its full potential unless you get your data modeling right. ... How do you address large partitions via data modeling? Basically, it’s time to rethink your primary key. The primary key determines how your data is distributed across the cluster, which affects both performance and resource utilization.
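To make the partition-key point concrete, here is a small illustrative sketch (names and schema are hypothetical) of splitting an ever-growing partition by adding a time bucket to the key:

```python
from datetime import datetime

def partition_key(device_id: str, ts: datetime) -> tuple[str, str]:
    # Without bucketing, every reading for a device lands in one partition,
    # which grows without bound. Bucketing by day caps partition size.
    return (device_id, ts.strftime("%Y-%m-%d"))

# Equivalent CQL-style schema (hypothetical):
#   CREATE TABLE readings (
#       device_id text, day text, ts timestamp, value double,
#       PRIMARY KEY ((device_id, day), ts)
#   );

print(partition_key("sensor-42", datetime(2022, 6, 26, 14, 30)))
# ('sensor-42', '2022-06-26')
```

The trade-off is that reads spanning many days must now query several partitions, which is why the right bucket size depends on your access patterns.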


AI and customer care: balancing automation and agent performance

AI alone brings real challenges to delivering outstanding customer service and satisfaction. For starters, the technology must perform flawlessly, or it can lead to misunderstandings and errors that frustrate customers. It also lacks the humanised context of empathy and an understanding of every customer’s individual and unique needs. A concern we see repeatedly is whether AI will eventually replace human engagement in customer service. Despite the recent advancements in AI technology, I think we can agree it remains unlikely. Complex issues that arise daily with customers still require human assistance. While AI’s strength lies in dealing with low-touch tasks and making agents more effective and productive, at this point, more nuanced issues still demand the human touch. However, the expectation of AI shouldn’t be that it replaces humans. Instead, the focus should be on how AI can streamline access to live-agent support and enhance the end-to-end customer care process. 


How to Handle the 3 Most Time-Consuming Data Management Activities

In the context of data replication or migration, data integrity can be compromised, resulting in inconsistencies or discrepancies between the source and target systems. This is the second most common challenge faced by data producers, cited by 40% of organizations, according to The State of DataOps report. Replication processes generate redundant copies of data, while migration efforts may inadvertently leave extraneous data in the source system. Consequently, this can lead to uncertainty about which data version to rely upon and can result in wasteful consumption of storage resources. ... Another factor affecting data availability is the use of multiple cloud service providers and software vendors, each offering proprietary tools and services for data storage and processing. Organizations that heavily invest in one platform may find it challenging to switch to an alternative due to compatibility issues. Transitioning away from an ecosystem can incur substantial costs and effort for data migration, application reconfiguration, and staff retraining.
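One common way to catch source/target discrepancies is to compare per-row digests on both sides. The sketch below assumes a hypothetical tabular dataset keyed by `id`; a production check would stream and batch rather than hold everything in memory:

```python
import hashlib

def row_digest(row: dict) -> str:
    # Canonicalize column order so both sides hash identically.
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def diff_tables(source_rows, target_rows, key="id"):
    src = {r[key]: row_digest(r) for r in source_rows}
    tgt = {r[key]: row_digest(r) for r in target_rows}
    missing = src.keys() - tgt.keys()          # rows never replicated
    extra = tgt.keys() - src.keys()            # extraneous rows on the target
    mismatched = {k for k in src.keys() & tgt.keys() if src[k] != tgt[k]}
    return missing, extra, mismatched

source = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
target = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "grace"}]
print(diff_tables(source, target))  # (set(), set(), {2})
```

Running a check like this after each replication or migration run tells you directly which version of the data to trust.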


The Secret of Protecting Society Against AI: More AI?

One of the areas of greatest concern with generative AI tools is the ease with which deepfakes -- images or recordings that have been convincingly altered and manipulated to misrepresent someone -- can be generated. Whether it is highly personalized emails or texts, audio generated to match the style, pitch, and cadence of actual employees, or even video crafted to appear indistinguishable from the real thing, phishing is taking on a new face. To combat this, tools, technologies, and processes must evolve to create verifications and validations that ensure the parties on both ends of a conversation are trusted and validated. One of the methods of creating content with AI is the generative adversarial network (GAN). With this methodology, two networks -- one called the generator and the other called the discriminator -- are trained against each other to produce output that is almost indistinguishable from the real thing. During training and generation, the process alternates between the generator creating output and the discriminator trying to guess whether that output is real or synthetic. 
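A minimal training loop makes the generator/discriminator back-and-forth concrete. The sketch below (PyTorch, with a toy 1-D Gaussian standing in for “real” data) is illustrative only — actual deepfake models are vastly larger, but the adversarial loop has the same shape:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0      # "real" data: samples from N(4, 1.25)
    fake = G(torch.randn(64, 8))                # generator output from random noise

    # Discriminator step: learn to guess real vs. synthetic.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: produce output the discriminator scores as real.
    opt_g.zero_grad()
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(f"generated mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target 4.0)")
```

The same adversarial idea cuts both ways: discriminator-style models are one of the building blocks for the deepfake-detection tooling the article calls for.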



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar

Daily Tech Digest - June 26, 2022

Only 3% of Open Source Software Bugs Are Actually Attackable, Researchers Say

Determining what’s attackable requires looking beyond the presence of open source dependencies with known vulnerabilities and examining how they’re actually being used, says Manish Gupta, CEO of ShiftLeft. "There are many tools out there that can easily find and report on these vulnerabilities. However, there is a lot of noise in these findings," Gupta says. ... The idea of analyzing for attackability also involves assessing additional factors: whether the package that contains the CVE is loaded by the application, whether it is in use by the application, whether the package is in an attacker-controlled path, and whether it is reachable via data flows. In essence, it means taking a simplified threat modeling approach to open source vulnerabilities, with the goal of drastically cutting down on the fire drills. CISOs have already become all too familiar with these drills. When a new high-profile supply chain vulnerability like Log4Shell or Spring4Shell hits the industry back channels, then blows up into media headlines, their teams are called on to pull long days and nights figuring out where these flaws affect their application portfolios, and even longer hours applying fixes and mitigations to minimize risk exposure.
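One of those attackability signals — whether the vulnerable package is actually loaded or used by the application — can be approximated with static import analysis. The sketch below is deliberately simplified (the package names are hypothetical, and real reachability analysis also traces call paths and data flows):

```python
import ast
import pathlib

# Hypothetical CVE-affected dependencies pulled from a vulnerability feed.
VULNERABLE_PACKAGES = {"log4py", "yaml_unsafe"}

def imported_packages(src_dir: str) -> set[str]:
    """Collect top-level package names imported anywhere in the codebase."""
    found = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
    return found

used = imported_packages("./src") & VULNERABLE_PACKAGES
print("potentially attackable:", used or "none -- vulnerable deps never imported")
```

Even this crude filter separates “a vulnerable package exists in the dependency tree” from “the application actually exercises it,” which is the distinction that shrinks the fire-drill list.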


The Power and Pitfalls of AI for US Intelligence

Depending on the presence or absence of bias and noise within massive data sets, especially in more pragmatic, real-world applications, predictive analysis has sometimes been described as “astrology for computer science.” But the same might be said of analysis performed by humans. A scholar on the subject, Stephen Marrin, writes that intelligence analysis as a discipline by humans is “merely a craft masquerading as a profession.” Analysts in the US intelligence community are trained to use structured analytic techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs—which use strategies that run the gamut from checklists to matrixes that test assumptions or predict alternative futures—externalize the thinking or reasoning used to support intelligence judgments, which is especially important given the fact that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by humans, have come under scrutiny by experts like Chang, specifically for the lack of scientific testing that can evidence an SAT’s efficacy or logical validity.


Data Modeling and Data Models: Not Just for Database Design

The prevailing application-centric mindset has caused the fundamental problems that we have today, Bradley said, with multiple disparate copies of the same concept in system after system after system after system. Unless we replace that mindset with one that is more data-focused, the situation will continue to propagate, he said. ... Models have a wide variety of applicable uses and can present different levels of detail based on the intended user and context. A map, for example, is a model, and maps are used much like models are used in a business. Like data models, there are different levels of maps for different audiences and different purposes. A map of the counties in an election will provide a different view than a street map used for finding an address. A construction team needs a different type of detail on a map they use to connect a building to city water, and a lesson about different countries on a globe uses still another level of detail targeted to a different type of user. Similarly, some models are more focused on communication and others are used for implementation.


Microverse IDE Unveiled for Web3 Developers, Metaverse Projects

"With Microverse IDE, developers and designers collaboratively build low-latency, high-performance multiuser Microverse spaces and worlds which can then be published anywhere," the company said in a June 21 news release. As part of its Multiverse democratization effort, Croquet has open sourced its Microverse IDE Metaverse world builder and some related components under the Apache License Version 2.0 license so developers and adopters can examine, use and modify the software as needed. ... The California-based Croquet also announced the availability of its multiplane portal technology, used to securely connect independent 3D virtual worlds developed by different parties, effectively creating the Metaverse from independent microservices. These connections can even span different domains, the company said, thus providing safe, secure and decentralized interoperability among various worlds independent of the large technology platforms. "Multiplane portals solve a fundamental problem in the Metaverse with linking web-based worlds in a secure and safe way," the company said.


5 Firewall Best Practices Every Business Should Implement

Changes that impact your IT infrastructure happen every single day. You might install new applications, deploy additional network equipment, grow your user base, adopt non-traditional work practices, and so on. As all this happens, your IT infrastructure’s attack surface will also evolve. Sure, you can make your firewall evolve with it. However, making changes to your firewall isn’t something you should take lightly. A simple mistake can take services offline and disrupt critical business processes. Similarly, you could inadvertently expose ports to external access and compromise security. Before you apply changes to your firewall, you need a change management plan. The plan should specify the changes you intend to implement and what you hope to achieve. ... Poorly configured firewalls can be worse than no firewall at all, because they give you a false sense of security. The same is true of firewalls deployed without proper planning or routine audits. Many businesses are prone to these missteps, resulting in weak network security and a failed investment.
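As a simple illustration of the kind of pre-change review this implies, the sketch below audits a hypothetical ruleset for overly permissive "allow" rules before a change is applied. The rule format is invented for the example and not tied to any vendor:

```python
# Ports we intentionally expose to the internet (an assumption for this sketch).
PUBLIC_OK_PORTS = {80, 443}

RULES = [
    {"name": "web-in",   "src": "0.0.0.0/0", "port": 443,  "action": "allow"},
    {"name": "db-in",    "src": "0.0.0.0/0", "port": 5432, "action": "allow"},
    {"name": "deny-all", "src": "0.0.0.0/0", "port": "*",  "action": "deny"},
]

def audit(rules):
    """Yield any rule that opens a non-approved port to the whole internet."""
    for rule in rules:
        if (rule["action"] == "allow" and rule["src"] == "0.0.0.0/0"
                and rule["port"] not in PUBLIC_OK_PORTS):
            yield f"{rule['name']}: port {rule['port']} open to the internet"

for finding in audit(RULES):
    print("REVIEW BEFORE APPLYING:", finding)
# REVIEW BEFORE APPLYING: db-in: port 5432 open to the internet
```

Running an audit like this as part of the change management plan catches the "simple mistake" class of misconfiguration before it reaches production.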


Debate over AI sentience marks a watershed moment

While it is objectively true that large language models such as LaMDA, GPT-3 and others are built on statistical pattern matching, subjectively this appears like self-awareness. Such self-awareness is thought to be a characteristic of artificial general intelligence (AGI). Well beyond the mostly narrow AI systems that exist today, AGI applications are supposed to replicate human consciousness and cognitive abilities. Even in the face of remarkable AI advances of the last couple of years there remains a wide divergence of opinion between those who believe AGI is only possible in the distant future and others who think this might be just around the corner. DeepMind researcher Nando de Freitas is in this latter camp. Having worked to develop the recently released Gato neural network, he believes Gato is effectively an AGI demonstration, only lacking in the sophistication and scale that can be achieved through further model refinement and additional computing power. The deep learning transformer model is described as a “generalist agent” that performs over 600 distinct tasks with varying modalities, observations and action specifications. 


Data Architecture Challenges

Most traditional businesses preserved data privacy by holding function-specific data in departmental silos. In that scenario, data used by one department was not available or accessible to another department. However, that caused a serious problem in the advanced analytics world, where 360-degree customer data or enterprise marketing data are everyday necessities. Companies, irrespective of their size, type, or nature of business, soon realized that to succeed in the digital age, data had to be accessible and shareable. Then came data science, artificial intelligence (AI), and a host of related technologies that transformed businesses overnight. Today, the average business is data-centric, data-driven, and data-powered. Data is thought of as the new currency in the global economy. In this globally competitive business world, data in every form is traded and sold. For example, 360-degree customer data, global sales data, health care data, and insurance history data are all available with a few keystrokes. A modern Data Architecture is designed to “eliminate data silos, combining data from all corners of the company along with external data sources.” 


One in every 13 incidents blamed on API insecurity – report

Lebin Cheng, vice president of API security at Imperva, commented: “The growing security risks associated with APIs correlate with the proliferation of APIs, combined with the lack of visibility that organizations have into these ecosystems. At the same time, since every API is unique, every incident will have a different attack pattern. A traditional approach to security where one simple patch addresses all vulnerabilities doesn’t work with APIs.” Cheng added: “The proliferation of APIs, combined with the lack of visibility into these ecosystems, creates opportunities for massive, and costly, data leakage.” ... By the same metric, professional services are also highly exposed to API-related problems (10-15%), while manufacturing, transportation, and utilities (all 4-6%) sit in the mid-range. Industries such as healthcare have less than 1% of security incidents attributable to API-related security problems. Many organizations are failing to protect their APIs because doing so requires equal participation from the security and development teams, which have historically been somewhat at odds. 


What Are Deep Learning Embedded Systems And Its Benefits?

Deep learning is a hot topic in machine learning, with many companies looking to implement it in their products. Here are some benefits that deep learning embedded systems can offer. Increased efficiency and performance: deep learning algorithms are incredibly efficient, meaning they can achieve high performance even when running on small devices. This means that deep learning embedded systems can be used to improve the performance of existing devices and platforms, or to create new devices that are both powerful and efficient. Reduced size and weight: deep learning models are often very compact and can be implemented on small devices without sacrificing too much performance or capability. This reduces the device’s size and weight, making it more portable and easier to use. Greater flexibility: deep learning algorithms can often exploit complex data sets to improve performance. This means deep learning embedded systems can be configured to work with various data sets and applications, giving them greater flexibility and adaptability.
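As one concrete example of shrinking a model for a small device, the sketch below applies PyTorch dynamic quantization to a toy network and compares serialized sizes. The model is a stand-in invented for the example, not from the article:

```python
import io
import torch
import torch.nn as nn

# A small stand-in model; real embedded workloads would be task-specific.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert Linear-layer weights from float32 to int8; activations are quantized
# on the fly at inference, trading a little accuracy for size and speed.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def serialized_size(m) -> int:
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"float32: {serialized_size(model)} bytes")
print(f"int8:    {serialized_size(quantized)} bytes")  # roughly 4x smaller weights
```

Techniques like this are one reason the same network can run on both a server and a compact embedded target.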


State-Backed Hackers Using Ransomware as a Decoy for Cyber Espionage Attacks

The activity cluster, attributed to a hacking group dubbed Bronze Starlight by Secureworks, involves the deployment of post-intrusion ransomware such as LockFile, Atom Silo, Rook, Night Sky, Pandora, and LockBit 2.0. "The ransomware could distract incident responders from identifying the threat actors' true intent and reduce the likelihood of attributing the malicious activity to a government-sponsored Chinese threat group," the researchers said in a new report. "In each case, the ransomware targets a small number of victims over a relatively brief period of time before it ceases operations, apparently permanently." Bronze Starlight, active since mid-2021, is also tracked by Microsoft under the emerging threat cluster moniker DEV-0401, with the tech giant emphasizing its involvement in all stages of the ransomware attack cycle, from initial access to payload deployment. ... Key victims include pharmaceutical companies in Brazil and the U.S., a U.S.-based media organization with offices in China and Hong Kong, electronic component designers and manufacturers in Lithuania and Japan, a law firm in the U.S., and the aerospace and defense division of an Indian conglomerate.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson