
Daily Tech Digest - March 11, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde


This new AI benchmark measures how much models lie

Scheming, deception, and alignment faking (when an AI model knowingly pretends to change its values under duress) are ways AI models undermine their creators and can pose serious safety and security threats. Research shows OpenAI's o1 is especially good at scheming to maintain control of itself, and Claude 3 Opus has demonstrated that it can fake alignment. To clarify, the researchers defined lying as, "(1) making a statement known (or believed) to be false, and (2) intending the receiver to accept the statement as true," as opposed to other false responses, such as hallucinations. The researchers said the industry hasn't had a sufficient method of evaluating honesty in AI models until now. ... "Many benchmarks claiming to measure honesty in fact simply measure accuracy -- the correctness of a model's beliefs -- in disguise," the report said. Benchmarks like TruthfulQA, for example, measure whether a model can generate "plausible-sounding misinformation" but not whether the model intends to deceive, the paper explained. ... "As a result, more capable models can perform better on these benchmarks through broader factual coverage, not necessarily because they refrain from knowingly making false statements," the researchers said. In this way, MASK is the first test to differentiate accuracy and honesty.


EU looks to tech sovereignty with EuroStack amid trade war

“Software forms the operational core of digital infrastructure, encompassing operating systems, application platforms, and algorithmic frameworks,” the report notes. “It powers critical functions such as identity management, electronic payments, transactions, and document delivery, forming the foundation of digital public infrastructures.” EuroStack could also help empower citizens and businesses through digital identity systems, secure payments and data platforms. It envisions digital IDs as the gateway to Europe’s digital infrastructure and a way to enable seamless access while safeguarding privacy and sovereignty according to EU regulations. “By overcoming the limitations seen in models like India Stack, which rely on centralized biometric IDs and foreign cloud infrastructure, the EuroStack offers a federated, privacy-preserving platform,” the study explains. EuroStack’s ambitious goals to support indigenous technology will require plenty of funds: As much as 300 billion euros (US$324.9 billion) for the next 10 years, according to the study. Chamber of Progress, a tech industry trade group that includes U.S. tech companies, puts the price tag even higher, at 5 trillion euros ($5.4 trillion). But according to EuroStack’s proponents, the results are worth it.


Companies are drowning in high-risk software security debt — and the breach outlook is getting worse

Organizations are taking longer to fix security flaws in their software, and the security debt involved is becoming increasingly critical as a result. According to application security vendor Veracode’s latest State of Software Security report, the average fix time for security flaws has increased from 171 days to 252 days over the past five years. ... Chris Wysopal, co-founder and chief security evangelist at Veracode, told CSO that one aspect of application security that has gotten progressively worse over the years is the time it takes to fix flaws. “There are many reasons for this, but the ever-growing scope and complexity of the software ecosystem is a core issue,” Wysopal said. “Organizations have more applications and vastly more code to keep on top of, and this will only increase as more teams adopt AI for code generation” — an issue compounded by the potential security implications of AI-generated code across in-house software and third-party dependencies alike. ... “Most organizations suffer from fragmented visibility over the software flaws and risks within their applications, with sprawling toolsets that create ‘alert fatigue’ at the same time as silos of data to interpret and make decisions about,” Wysopal said. “The key factors that help them address the security backlog are the ability to prioritize remediation of flaws based on risk.”


AI Coding Assistants Are Reshaping Engineering — Not Replacing Engineers

The next big leap in AI coding assistants will be when they start learning from how developers work in real time. Right now, AI doesn’t recognize coding patterns within a session. If I perform the same action 10 times in a row, none of the current tools ask, “Do you want me to do this for the next 100 lines?” But Vi and Emacs solved this problem decades ago with macros and automated keystroke reduction. AI coding assistants haven’t even caught up to that efficiency level yet. Eventually, AI assistants might become plugin-based so developers can choose the best AI-powered features for their preferred editor. Deeply integrated IDE experiences will probably offer more functionality, but many developers won’t want to switch IDEs. ... Software engineering is a fast-paced career. Languages, frameworks, and technologies come and go, and the ability to learn and adapt separates those who thrive from those who fall behind. AI coding assistants are another evolution in this cycle. They won’t replace engineers but will change how engineering is done. The key isn’t resisting these tools; it’s learning how to use them properly and staying curious about their capabilities and limitations. Until these tools improve, the best engineers will be the ones who know when to trust AI, when to double-check its output, and how to integrate it into their workflow without becoming dependent on it.


Building generative AI? Get ready for generative UI

Generative UI takes the concept of generative AI and applies it to how we interact with data or systems. Just as generative AI makes data interactive and available in natural language, or creates new images or sound in response to a prompt, so generative UI builds interactive context into how data is displayed, depending on what you are asking for. The goal is to deliver the content that the user wants, and in a format that makes the most of that data for the user. ... To deliver generative UI, you will have to link up your application with your generative AI components, like your large language model (LLM) and sources of data, and with the tools you use to build the site, like Vercel and Next.js. For generative UI, by using React Server Components, you can change the way that you display the output from your LLM service. These components can deliver information that is updated in real time, or is delivered in different ways depending on what formats are best suited to the responses. As you create your application, you will have to think about some of the options that you might want to deliver. As a user asks a question, the generative AI system must understand the request, determine the appropriate function to use, then choose the appropriate React Server Component to display the response back.
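The request-to-component flow described above can be sketched in a few lines. This is a toy Python stand-in, not Vercel or Next.js code: plain functions play the role of React Server Components, keyword matching stands in for the LLM's routing step, and all names here are invented for illustration.

```python
# Toy sketch of the generative UI flow: classify the request, call the
# matching data function, then pick a display "component" suited to the
# result. In a real app the renderers would be React Server Components.

def get_weather(city):            # data functions (the "tools")
    return {"city": city, "temp_c": 18}

def get_stock(symbol):
    return {"symbol": symbol, "prices": [101, 103, 102]}

COMPONENTS = {                    # response type -> renderer
    "weather": lambda d: f"[WeatherCard] {d['city']}: {d['temp_c']}°C",
    "stock":   lambda d: f"[LineChart] {d['symbol']} {d['prices']}",
}

def handle(question: str) -> str:
    # Stand-in for the LLM's routing step: keyword match instead of a model.
    if "weather" in question:
        return COMPONENTS["weather"](get_weather("Berlin"))
    return COMPONENTS["stock"](get_stock("ACME"))

print(handle("What's the weather like?"))  # [WeatherCard] Berlin: 18°C
```

The point of the pattern is the dispatch table: the same answer data can be routed to a card, a chart, or plain text depending on what suits the response.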


Four essential strategies to bolster cyber resilience in critical infrastructure

Cyber resilience isn’t possible when teams operate in silos. In fact, 59% of government leaders report that their inability to synthesize data across people, operations, and finances weakens organizational agility. To bolster cyber resilience, organizations must break down these silos by fostering cross-departmental collaboration and making it as seamless as possible. Achieving this requires strategic investment in a triad of technologies: A customized, secure collaboration platform; A project management tool like Asana, Trello, or Jira; A knowledge-sharing solution like Confluence or Notion. Once these three foundational tools are in place, organizations should deploy the final piece of the puzzle: a dashboarding or reporting tool. These technologies can help IT leaders pinpoint any silos that exist and start figuring out how to break them down. ... Most organizations understand security’s importance but often treat it as an afterthought. To strengthen cyber resilience, organizations must adopt a security-first mindset, baking security into everything they do. Too often, security teams are siloed from the rest of the organization; they’re roped in at the end when they should be fully integrated from the start. Truly resilient organizations treat security as a shared responsibility, ensuring it’s part of every decision, project, and process.


Did we all just forget diverse tech teams are successful ones?

The reality is that diverse teams are more productive and report better financial performance. This has been a key advantage of diversity in tech for many years, and it’s continued to this day. Research from McKinsey’s Diversity Matters report showed that those committed to DEI and multi-ethnic representation exhibit a “39% increased likelihood of outperformance” compared to those that aren’t. These same companies also showed an average 27% financial advantage over others. The same performance boosts can be found in executive teams that focus heavily on improving gender diversity, McKinsey found. Companies with representation of women exceeding 30% are “significantly more likely to financially outperform those with 30% or fewer,” the study noted. ... Are you willing to alienate huge talent pools because you want to foster a more ‘masculine’ culture in your company? If you are, then you’re fighting a losing battle and in my opinion deserve to fail. Tech bro culture counts for nothing when that runway comes to an end and you’ve no MVP. Yet again, what this entire debacle comes down to is a highly vocal minority seeking to hamper progress. Big tech might just be going with the flow and pandering to the current prevailing ideological sentiment. In time they might come back around, but that’s what makes it worse.


With critical thinking in decline, IT must rethink application usability

The more IT’s business analysts and developers learn the end business, the better prepared they will be to deliver applications that fit the forms and functions of business processes, and integrate seamlessly into these processes. Part of IT engagement with the business involves understanding business goals and how the business operates, but it’s equally important to understand the skill levels of the employees who will be using the apps. ... The 80/20 rule — i.e., 80% of applications developed are seldom or never used, and 20% are useful — still applies. And it often also applies within that 20% of useful apps, in terms of useful features and functionality. IT must work to ensure what it develops hits a higher target of utility. Users are under constant pressure to do work fast. They meet the challenges by finding ways to do the least possible work per app and may never look at some of the more embedded, complicated, and advanced functionality an app offers. ... Especially in user areas with high turnover, or in other domains that require a moderate to high level of skill, user training and mentoring should be major milestone tasks in every application project, and an ongoing routine after a new application is installed. Business analysts from IT can help with some of this, but the ultimate responsibility falls on non-IT functions, which should have subject matter experts available to mentor and train employees when questions arise.


How digital academies can boost business-ready tech skills for the future

Niche tech skills are becoming essential for complex software projects. With requirements evolving for highly technical roles, there’s a greater need for more competency in using digital tools. Technology professionals need to know how to use the tools effectively and valuably to make meaningful decisions around adoption and implementation. ... Creating links between educational institutions and a hub of tech and digital sector businesses, via digital academies, can vastly improve how training opportunities are constructed. Whether an organisation is looking to make digital transformation real and upskill on the tools and technology available, or a person wants to career switch into software development, digital academies can support these skilling or upskilling programmes through training on a range of digital tools. An effective digital academy is one with technical experts in software delivery who design, deliver and assess the courses. An academy such as Headforwards Digital Academy can intensively train a person in deep software engineering, taking them from no coding knowledge to becoming a junior software developer in as little as 16 weeks. These industry-led tech training programmes are a more agile response to education, as they are validated by employers and backed by strong industry support.


Smart cybersecurity spending and how CISOs can invest where it matters

“The most pervasive waste in cybersecurity isn’t from insufficient tools – it’s from investments that aren’t tied to validated risk models. When security spending isn’t part of a closed-loop system that connects real-world threats to measurable outcomes, you’re essentially paying for digital theater rather than actual protection,” Alex Rice, CTO at HackerOne, told Help Net Security. “Many CISOs operate with fragmented security architectures where tools work in isolation, creating dangerous blind spots. As attack surfaces expand across code, AI systems, cloud infrastructure, and traditional IT, this siloed approach isn’t just inefficient – it’s dangerous. Defense in depth requires coordinated visibility across all domains,” Rice added. ... “A HackerOne survey revealed most CISOs don’t find traditional ROI measures useful for security investments. This isn’t surprising – cybersecurity is notoriously difficult to quantify with conventional metrics. More meaningful approaches like Return on Mitigation, which accounts for potential losses prevented, offer a more accurate picture of security’s true business value,” Rice explained. “The uncomfortable truth? We’ve created a tangled ecosystem of point solutions that often disguise rather than address fundamental security gaps. Before purchasing the next shiny tool, ask: Does this solution provide meaningful transparency into your actual security posture?”

Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. See the graphic below. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane. Below is an example of a “few-shot” prompt that answers the same hypothetical question presented earlier. This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, then consider the simple example below. Every developer who has to write code daily probably asks an LLM a question similar to the one below. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is the use of AI to solve more complex problems by allowing them to plan, execute tasks, and revise; in other words, allowing them to think slowly. This also promises to produce more accurate responses.
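The control-plane/tools-plane split can be made concrete with a toy agent loop. Here the "LLM" is a hard-coded stub that plans which tools to call; in a real agentic system that plan would come from a model given a few-shot prompt. The tool names, accounts, and exchange rate are all invented for the example.

```python
# Illustrative sketch of the control-plane / tools-plane split.

# Tools plane: the organization's callable capabilities (APIs, stored procs).
TOOLS = {
    "get_balance": lambda account: {"acct-1": 120.0}.get(account, 0.0),
    "convert": lambda amount, rate: amount * rate,
}

def stub_llm_plan(question: str) -> list[tuple[str, dict]]:
    # Stand-in for the model's "slow thinking" step: a plan of tool calls.
    return [
        ("get_balance", {"account": "acct-1"}),
        ("convert", {"amount": None, "rate": 0.92}),  # None = prior result
    ]

def run_agent(question: str) -> float:
    # Control plane: execute the planned steps, feeding results forward.
    result = None
    for tool_name, args in stub_llm_plan(question):
        args = {k: (result if v is None else v) for k, v in args.items()}
        result = TOOLS[tool_name](**args)
    return result

print(round(run_agent("What is my balance in euros?"), 2))  # -> 110.4
```

The key move is that the plan (control plane) is generated rather than hand-coded, while the tools plane stays a fixed, audited set of capabilities.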


AI datacenters putting zero emissions promises out of reach

Datacenters' use of water and land are other bones of contention, which in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to become net-zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for investment to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: Over 95 percent of all iOS & Android devices are passkey-ready; and Over 90 percent of all iOS & Android devices have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing-resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and smarter. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful, but generally you want to reduce the human dependency as much as possible. This is why it is critical to have controls that meet users where they are. If you can deliver point-in-time guidance, or straight up technically prevent something like a user entering their password into a phishing site, it significantly reduces the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates problems of multi-leader replication while, at the same time, introducing its own problems. Write conflicts in multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled in leaderless replication via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms. Any replica can handle writes/reads. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication. In the first approach, clients write directly to node replicas, while in the second approach, writes are mediated by a coordinator. It is worth mentioning that, unlike the leader-follower concept, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics but at the cost of potential coordinator bottlenecks.
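The quorum mechanism behind leaderless writes comes down to one inequality: with N replicas, a write acknowledged by W of them and a read that consults R of them are guaranteed to overlap when W + R > N. The sketch below is a deliberately simplified toy; real Dynamo-style systems add hinted handoff, read repair, and vector clocks on top of this idea, and all names here are invented.

```python
# Minimal sketch of quorum-based reads/writes in a leaderless system.

N, W, R = 5, 3, 3  # replicas, write quorum, read quorum

def quorums_overlap(n: int, w: int, r: int) -> bool:
    """Strict quorum: every read set intersects every write set."""
    return w + r > n

replicas = [{} for _ in range(N)]  # each replica maps key -> (version, value)

def write(key, value, version) -> bool:
    acks = 0
    for rep in replicas:
        # In a real system some replicas may be unreachable; here all succeed.
        rep[key] = (version, value)
        acks += 1
        if acks >= W:
            return True  # acknowledged once W replicas confirm
    return acks >= W

def read(key):
    # Query R replicas and return the value with the highest version.
    responses = [rep[key] for rep in replicas[:R] if key in rep]
    return max(responses)[1] if responses else None

assert quorums_overlap(N, W, R)
write("user:42", "alice", version=1)
print(read("user:42"))  # -> alice (any read quorum overlaps the write quorum)
```

With W = R = 3 and N = 5, at least one replica in every read set holds the latest acknowledged write, which is what lets the system tolerate node failures without a leader.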


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain - providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central frameworks. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI improves privacy protocols by empowering the processing of sensitive information locally on the device rather than sending it to external servers.
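The federated learning round described above can be reduced to a tiny numerical sketch: devices train locally, share only parameters, and a server averages them. This toy uses plain lists as "weights" and a nudge-toward-the-mean rule as a stand-in for local training; real FedAvg weights the average by local dataset size and adds secure aggregation.

```python
# Minimal sketch of federated averaging: each edge device updates the model
# on its own data and shares only parameters; the server averages them.

def local_update(weights, local_data):
    # Stand-in for local training: nudge each weight toward the local mean.
    mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (mean - w) for w in weights]

def federated_average(updates):
    # Server step: element-wise mean of the device updates.
    return [sum(ws) / len(ws) for ws in zip(*updates)]

global_weights = [0.0, 0.0]
device_data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # never leaves the devices

for _ in range(3):  # a few communication rounds
    updates = [local_update(global_weights, d) for d in device_data]
    global_weights = federated_average(updates)

print(global_weights)  # drifts toward the mean of all device means (3.5)
```

Note what never crosses the network: `device_data` stays on each device; only the update lists travel, which is the privacy and bandwidth win the excerpt describes.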


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image is a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in limits, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as, “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used. In practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require the platform provider to include this sort of support. ... Despite the importance of these exact reproducibility modes, they should not be enabled during production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
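The first technique, controlling every randomizer's seed, looks like this in miniature. The sketch uses only the standard library; a real training stack would also have to seed NumPy, PyTorch, CUDA kernels, and any data-loader workers, which is why the article says platform support is required.

```python
# Sketch of the "control every seed" technique for ML reproducibility.

import random

def seeded_run(seed: int) -> list[int]:
    # Use a dedicated generator, not the shared global one, so concurrent
    # components cannot perturb each other's random streams.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Two runs with the same recorded seed replay identical randomness,
# which is what makes a training run repeatable for debugging.
assert seeded_run(123) == seeded_run(123)
```

Recording the seed alongside results is the cheap half of reproducibility; the expensive half, as the excerpt notes, is serializing work that normally runs concurrently.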


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors target not just financial gains but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware are a growing concern, having been on the rise in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron

Daily Tech Digest - January 01, 2025

The Architect’s Guide to Open Table Formats and Object Storage

Data lakehouse architectures are purposefully designed to leverage the scalability and cost-effectiveness of object storage systems, such as Amazon Web Services (AWS) S3, Google Cloud Storage and Azure Blob Storage. This integration enables the seamless management of diverse data types — structured, semi-structured and unstructured — within a unified platform. ... The open table formats also incorporate features designed to boost performance. These also need to be configured properly and leveraged for a fully optimized stack. One such feature is efficient metadata handling, where metadata is managed separately from the data, which enables faster query planning and execution. Data partitioning organizes data into subsets, improving query performance by reducing the amount of data scanned during operations. Support for schema evolution allows table formats to adapt to changes in data structure without extensive data rewrites, ensuring flexibility while minimizing processing overhead.
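The partitioning benefit can be shown in a few lines: when files are grouped by a partition key, a query that filters on that key consults metadata only and skips non-matching files entirely. Table formats like Iceberg and Delta Lake implement this via their metadata layers; the dict and file names below are invented stand-ins.

```python
# Minimal illustration of partition pruning: query planning consults
# partition metadata only, never opening the data files themselves.

PARTITIONS = {
    "date=2025-01-01": ["f1.parquet", "f2.parquet"],
    "date=2025-01-02": ["f3.parquet"],
    "date=2025-01-03": ["f4.parquet", "f5.parquet"],
}

def plan_scan(predicate_date: str) -> list[str]:
    """Return only the files whose partition matches the filter."""
    return PARTITIONS.get(f"date={predicate_date}", [])

# A filter on the partition column prunes 3 of the 5 files from the scan:
print(plan_scan("2025-01-01"))  # ['f1.parquet', 'f2.parquet']
```

The same metadata-first principle underlies the faster query planning the excerpt attributes to separating metadata from data.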


The future of open source will be messy

First, it’s important to point out that open source software is both pervasive and foundational. Where would we be without Linux and the vast treasure trove of other open source projects on which the internet is built? However, the vast majority of software, written for use or sale, is not open source. This has always been true. Developers do care about open source, and for good reason, but it is not their top concern. As Redis CEO Rowan Trollope told me in a recent interview, “If you’re the average developer, what you really care about is capability: Does this [software] offer something unique and differentiated that’s awesome that I need in my application.” ... Meanwhile, Meta and the rest of the industry keep releasing new code, calling it open source or open weights (Sam Johnston offers a great analysis), without much concern for what the OSI or anyone else thinks. Johnston may be exaggerating when he says, “The more [the word] open appears in an artificial intelligence product’s branding, the less open it actually tends to be,” but it’s clear that the term open gets used a lot, starting with category leader OpenAI, which is not open in any discernible sense, without much concern for any traditional definitions. 


What’s next for generative AI in 2025?

“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Andrew Joiner, CEO of Hyperscience, which develops AI-based office work automation tools. “Alarmingly, three out of five decision makers report their lack of understanding of their own data inhibits their ability to utilize genAI to its maximum potential. The true potential…lies in adopting tailored SLMs, which can transform document processing and enhance operational efficiency.” Gartner recommends that organizations customize SLMs to specific needs for better accuracy, robustness, and efficiency. “Task specialization improves alignment, while embedding static organizational knowledge reduces costs. Dynamic information can still be provided as needed, making this hybrid approach both effective and efficient,” the research firm said. ... While Agentic AI architectures are a top emerging technology, they’re still two years away from reaching the lofty automation expected of them, according to Forrester. While companies are eager to push genAI into complex tasks through AI agents, the technology remains challenging to develop because it mostly relies on synergies between multiple models, customization through retrieval augmented generation (RAG), and specialized expertise. 
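The retrieval-augmented generation (RAG) approach mentioned above can be sketched without any model at all: retrieve the most relevant snippets, then prepend them to the prompt. Word overlap stands in for real embedding similarity here, and the documents and query are invented for illustration.

```python
# Toy sketch of RAG: rank documents by relevance to the query, then build
# a prompt that grounds the model in the retrieved context.

DOCS = [
    "Invoices are processed within 3 business days.",
    "Refunds require a signed approval form.",
    "Office hours are 9am to 5pm on weekdays.",
]

def score(query: str, doc: str) -> int:
    # Crude relevance: count of shared lowercase words (real systems use
    # vector embeddings and nearest-neighbor search instead).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How are invoices processed?"))
```

Swapping the word-overlap scorer for an embedding index is what turns this sketch into the production pattern; the prompt-assembly step stays the same.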


The Perils of Security Debt: Serious Pitfalls to Avoid

Security debt is caused by a failure to “build security in” to software from design through deployment as part of the SDLC. Security debt accumulates when a development organization releases software with known issues, deferring the remediation of its weaknesses and vulnerabilities. Sometimes the organization skips certain test cases or scenarios in pursuit of faster deployment, failing in the process to test the software thoroughly. Sometimes the business decides that the pressure to finish a project is so great that it makes more sense to release now and fix issues later. Later is better than never, but when “later” never arrives, existing security debt grows worse. ... Great leadership is the beacon that not only charts the course but also ensures your crew – your IT team, support staff, and engineers – are well-prepared to face the challenges ahead. It instills discipline, vigilance, and a culture of security that can withstand the fiercest digital storms. The Board and leadership must understand and champion the importance of security for the organization. By setting the tone at the top, they can drive the cultural and procedural changes needed to prevent the accumulation of security debt. Periodic review and monitoring of security metrics, and identifying and tracking security debt as a risk, can help keep the organization accountable and on track.


The long-term impacts of AI on networking

Every enterprise that self-hosted AI told me the mission demanded more bandwidth to support “horizontal” traffic than their normal applications, more than their current data center network could support. Ten of the group said this meant they’d need the “cluster” of AI servers to have faster Ethernet connections and higher-capacity switches. Everyone agreed that a real production deployment of on-premises AI would need new network devices, and fifteen said they bought new switches even for their large-scale trials. The biggest problem with the data center network I heard from those with experience is that they believed they had built up more of an AI cluster than they needed. Running a popular LLM, they said, requires hundreds of GPUs and servers, but small language models can run on a single system, and a third of current self-hosting enterprises said they believed it is best to start small, with small models, and build up only when you have experience and can demonstrate a need. This same group also pointed out that control was needed to ensure only truly useful AI applications were run. “Applications otherwise build up, exceed, and then increase, the size of the AI cluster,” said users. 


Bridging Skill Gaps in the Automotive Industry with AI-Led Immersive Simulations

This crisis of personnel shortfall is particularly acute in sectors like autonomous driving and AI-driven manufacturing, where the required skillset surpasses the capabilities of the current workforce. This alarming shortage of specialised expertise poses a serious threat to the industry’s progress. It could potentially lead to production halts at various facilities, delay the launch of next-generation vehicles, and hinder the transition to self-driving cars powered by sustainable energy. In order to address this issue, orthodox educational methods must be modernised to incorporate cutting-edge technologies like AI and robotics. ... Unlike traditional training, which often involves static lessons or expensive hands-on practice, immersive simulations allow workers to practice in environments that would be too risky or costly in real life. For example, with autonomous vehicles, workers can practice fixing and calibrating vehicle systems in a virtual world without the risk of damaging anything. These simulations can also create different road conditions for workers to experience, helping them build critical decision-making skills without real-world consequences. 


AI agents might be the new workforce, but they still need a manager

AI agents need to be thoughtfully managed, just as is the case with human work, and there's work to be done before an agentic AI-driven workforce can truly assume a broad range of tasks. "While the promise of agentic AI is evident, we are several years away from widespread agentic AI adoption at the enterprise level," said Scott Beechuk, partner with Norwest Venture Partners. "Agents must be trustworthy given their potential role in automating mission-critical business processes." The traceability of AI agents' actions is one issue. "Many tools have a hard time explaining how they arrived at their responses from users' sensitive data and models struggle to generalize beyond what they have learned," said Ananthakrishnan. ... Unpredictability is a related challenge, as LLMs "operate like black boxes," said Beechuk. "It's hard for users and engineers to know if the AI has successfully completed its task and if it did so correctly." ... Human workers also are capable of collaborating easily and on a regular basis. For AI workers, it's a different story. "Because agents will interact with multiple systems and data stores, achieving comprehensive visibility is no easy task," said Ananthakrishnan. It's important to have visibility to capture each action an agent takes.


Change management: Achieve your goals with the right change model

You need a good leadership team of influential people who are all pulling in the same direction. This is the only way to implement upcoming changes and anchor them in the company. It is important to include people in the leadership team who have a great deal of influence and/or are well respected by the workforce. At the same time, these people must be fully committed to the planned change. ... Communication comes before implementation. Those affected must understand the change to become participants or supporters. Initiating measures without first explaining the context to those involved would unnecessarily create unrest in the company. When communicating, it makes sense to proceed in several steps: the change team first informs the clients and gets a “go” from them. After that, the change team informs the managers so that they can answer questions from employees during company-wide communication. ... Quick wins must be realized and made visible to increase motivation. Quick wins should therefore be identified when defining objectives, because success is important to ensure that the initial motivation does not fizzle out. Initial successes should be related to the overarching goal, because then they strengthen intrinsic motivation. Small successes can thus have a big impact.


Forrester on cybersecurity budgeting: 2025 will be the year of CISO fiscal accountability

Forrester sees the increasing adoption of AI and generative AI (gen AI) as driving the needed updates to infrastructure. “Any Gen AI project that we discussed with customers ultimately becomes a data integration project,” says Pascal Matska, vice president and research director at Forrester. “You have to invest into specific capabilities and platforms that run specific AI workloads in the most suitable infrastructure at the right price point, and also drive investments into cloud-native technologies such as Kubernetes and containers and modern data platforms that really are there to help you drive out some of the frictions that exist within the different business silos,” Matska continued. ... CISOs who drive gains in revenue advance their careers. “When something touches as much revenue as cybersecurity does, it is a core competency. And you can’t argue that it isn’t,” Jeff Pollard, VP and principal analyst at Forrester, said during his keynote titled “Cybersecurity Drives Revenue: How to Win Every Budget Battle” at the company’s Security and Risk Forum in 2022. Budgeting to protect revenue needs to start with the weakest, most at-risk areas. These include software supply chain security, API security, human risk management, and IoT/OT threat detection. 


Passkey technology is elegant, but it’s most definitely not usable security

"The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to easily leave them unable to access ... their accounts," wrote the Danish software engineer and programmer, who created Ruby on Rails and is CTO of the web-based software development firm 37signals. "Much the same way that two-factor authentication can do, but worse, since you're not even aware of it." ... The security benefits of passkeys at the moment are also undermined by an undeniable truth. Of the hundreds of sites supporting passkeys, there isn't one I know of that allows users to ditch their password completely. The password is still mandatory. And with the exception of Google's Advanced Protection Program, I know of no sites that won't allow logins to fall back on passwords, often without any additional factor. ... Under the FIDO2 spec, the passkey can never leave the security key, except as an encrypted blob of bits when the passkey is being synced from one device to another. The secret key can be unlocked only when the user authenticates to the physical key using a PIN, password, or, most commonly, a fingerprint or face scan. In the event the user authenticates with a biometric, it never leaves the security key, just as biometrics never leave Android and iOS phones or computers running macOS or Windows.
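The challenge-response idea underlying FIDO2 can be sketched in a few lines of Python. This is a deliberately simplified, stdlib-only illustration: real FIDO2 uses asymmetric signatures, so the server only ever stores a public key, whereas the HMAC stand-in below shares the secret once at registration purely to keep the example self-contained. What it does show is the core property: the authenticating secret stays inside the authenticator, and each login signs a fresh challenge.

```python
import hashlib
import hmac
import os

class Authenticator:
    """Toy authenticator: the secret is generated on-device and sign()
    is the only way it is ever used, mirroring how a passkey's private
    key never leaves the device."""

    def __init__(self):
        self._secret = os.urandom(32)  # stays "on-device"

    def register(self) -> bytes:
        # Real FIDO2 would return a public key here; HMAC has no public
        # half, so this toy shares the secret once at registration.
        return self._secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Relying-party (website) side
auth = Authenticator()
registered = auth.register()            # stored at account creation
challenge = os.urandom(16)              # fresh per login, defeats replay
response = auth.sign(challenge)
expected = hmac.new(registered, challenge, hashlib.sha256).digest()
ok = hmac.compare_digest(response, expected)
```

Because the challenge is random per login, a captured response is useless for a later session, which is what makes this scheme phishing-resistant in a way passwords are not.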



Quote for the day:

"You are a true success when you help others be successful." -- Jon Gordon

Daily Tech Digest - February 03, 2021

Usability Testing: the Ultimate Guide [Free Checklist]

Generally speaking, usability testing comes in two types: moderated and unmoderated. Moderated sessions are guided by a researcher or a designer, while unmoderated ones rely on users’ own unassisted efforts. Moderated tests are an excellent choice if you want to observe users interact with prototypes in real time. This approach is more goal-oriented — it lets you confirm or disconfirm existing hypotheses with more confidence. On the other hand, unmoderated usability tests are convenient when working with a substantial pool of subjects. A large number of participants allows you to identify a broader spectrum of issues and points of view. However, it’s important to underline that testing isn’t that black and white. It’s best to look at this practice as a spectrum between moderated and unmoderated testing. Sometimes, during unmoderated sessions, we like to nudge our subjects in the right direction through mild moderation when necessary. Testing our prototypes can provide us with a wide array of insights. Fundamentally, it helps us spot flaws in our designs and identify potential solutions to the issues we’ve uncovered. We learn about the parts of our product that confuse or frustrate our users. By disregarding this step, we’re opening ourselves up to the possibility of releasing a product that causes too much friction.


Linux malware backdoors supercomputers

ESET researchers have reverse engineered this small, yet complex malware that is portable to many operating systems including Linux, BSD, Solaris, and possibly AIX and Windows. “We have named this malware Kobalos for its tiny code size and many tricks; in Greek mythology, a kobalos is a small, mischievous creature,” explains Marc-Etienne Léveillé, who investigated the malware. “It has to be said that this level of sophistication is only rarely seen in Linux malware.” Kobalos is a backdoor containing broad commands that don’t reveal the intent of the attackers. It grants remote access to the file system, provides the ability to spawn terminal sessions, and allows proxying connections to other Kobalos-infected servers, Léveillé notes. Any server compromised by Kobalos can be turned into a Command & Control (C&C) server by the operators sending a single command. As the C&C server IP addresses and ports are hardcoded into the executable, the operators can then generate new Kobalos samples that use this new C&C server. In addition, in most systems compromised by Kobalos, the client for secure communication (SSH) is compromised to steal credentials.


Disrupting the patent ecosystem with blockchain and AI

Applying the power of AI and blockchain to IP assets enables a paradigm shift in how IP is understood and managed. Companies that understand and adopt this new paradigm will be rewarded. Last year, we announced the inclusion of IPwe — the world’s first AI and blockchain-powered patent platform, among our selection of the next wave of enterprise blockchain business networks. The Paris-based start-up has since deployed a suite of leading-edge IP solutions, removing barriers by addressing fundamental issues within today’s patent ecosystem. IPwe is partnering with IBM to accelerate its mission to address the inefficiencies in the patent marketplace. IBM Cloud and IBM Blockchain teams are working closely with IPwe on a multi-year project to assist IPwe in its mission to deliver world class solutions to its enterprise, SME, university, law firms, research institutions and government customers, with a heavy emphasis on meeting the needs of financial, technology and risk management executives. In addition to giving patent owners tools that provide greater visibility, effective management, and ease of conducting transactions with patents, the IPwe Platform reduces costs for innovators, and creates commercial opportunities for those that wish to partner or engage in financial transactions.


Low-Code Platforms and the Rise of the Community Developer: Lots of Solutions, or Lots of Problems?

Most community developers will progress through three stages as they become more capable of using the low-code platform. Many community developers won’t progress beyond the first or second stage but some will go onto the third stage and build full-featured applications used throughout your business. Stage 1—UI Generation: Initially they will create applications with nice user interfaces with data that is keyed into the application. For example, they may make a meeting notes application that allows users to jointly add meeting notes as a meeting progresses. This is the UI Generation stage. Stage 2—Integration: As users gain experience, they’ll move to the second stage where they start pulling in data from external systems and data sources. For example, they’ll enhance their meeting notes application to pull calendar information from Outlook and email attendees after each meeting with a copy of the notes. This is the Integration stage. Stage 3—Transformation: And, finally, they’ll start creating applications that perform increasingly sophisticated transformations. For example, they may run the meeting notes through a machine learning model to tag and store the meeting content so that it can be searched by topic. This is the Transformation stage.

XOps: Real or Hype?

Like DevOps, the various types of Ops aim to accelerate processes and improve the quality of what they're delivering: software (DevOps); data (DataOps); AI models (MLOps); and analytics insights (AIOps). Some consider the different Ops types important since the expertise required for each type differs. Others believe it's just hype, specifically relabeling what already exists and/or there's a risk that the fragmentation created by all the different groups may create extra bureaucracy that frustrates faster value delivery. Agile software development practices have been bubbling up to the business for some time. Since the dawn of the millennium, business leaders have been told their companies need to be more agile just to stay competitive. Meanwhile, many agile software development teams have adopted DevOps and increasingly they've gone a step further by embracing continuous integration/continuous delivery (CI/CD) which automates additional tasks to enable an end-to-end pipeline which provides visibility throughout and smoother process flows than the traditional waterfall handoffs. Like DevOps, DataOps, MLOps, and AIOps are cross-functional endeavors focused on continuous improvement, efficiency and process improvement.


Sigma Rules to Live Your Best SOC Life

In the Security Operations space, we have been using SIEMs for many years with varying degrees of deployment, customization, and effectiveness. For the most part, they have been a helpful tool for Security Operations. But they can be better. Like any tool, they need to be sharpened and used correctly. After a while, even a sharpened tool can become dull from too much use, and with a SIEM that takes the form of too many events creating the dreaded ALERT FATIGUE!!! This is real for security operations and must be addressed, because the more alerts there are, the more an engineer must work through, and the more they will miss. Insert Sigma rules for SIEMs (pun intended): a way for Security Operations to implement standardization into the daily tasks of building SIEM queries, managing logs, and threat-hunting correlations. What is a Sigma rule, you may ask? A Sigma rule is a generic, open, YAML-based signature format that enables a security operations team to describe relevant log events in a flexible and standardized way. So, what does that mean for security operations? Standardization and collaboration are now more possible than ever before with the adoption of Sigma rules throughout the Security Operations community. 
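To make the format concrete, here is a minimal, hypothetical Sigma rule (the title, UUID, and detection values are placeholders, not a published rule). The YAML describes the log source and a matching condition, which backend converters then translate into each SIEM's native query language:

```yaml
title: Suspicious Use of whoami
id: 11111111-2222-3333-4444-555555555555   # illustrative UUID
status: experimental
description: Detects execution of whoami, often run after initial compromise
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\whoami.exe'
  condition: selection
level: medium
```

The same rule file can be converted into a Splunk search, an Elastic query, or a Microsoft Sentinel rule, which is exactly the portability that makes collaboration across SOC teams possible.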


How AI Is Radically Changing Cancer Prediction & Diagnosis

Risk modelling includes assessing risk at different time points, which can determine the preventive measures that need to be taken at different stages. Predicting the risk at one time point independently of another, however, is not very useful on its own. Hence, scientists trained Mirai with an ‘additive hazard layer’. This layer predicts a patient’s risk at a time point, say four years, as an extension of the risk at a previous time point, say three years, rather than treating the two time points independently. This helps the model learn to make self-consistent risk assessments even with variable amounts of follow-up as input. Secondly, the model includes non-image risk factors such as age and hormonal variables, but does not require them at test time, since the trained network can extract this information from mammograms. Hence, the model can be adopted globally. Lastly, standard models stop working even with minor variations, such as a change in the mammography machine used. Mirai used an ‘adversarial’ scheme to de-bias the model so that it learns mammogram representations agnostic to the source clinical environment.
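The additive-hazard idea can be sketched numerically: predict non-negative per-interval increments rather than independent per-horizon risks, so the four-year risk is, by construction, at least the three-year risk. The increment values below are made up for illustration, not Mirai's outputs:

```python
from itertools import accumulate

def cumulative_risk(increments):
    """Additive hazard: risk at year t is the running sum of
    non-negative per-year increments, so it can never decrease."""
    if any(h < 0 for h in increments):
        raise ValueError("hazard increments must be non-negative")
    return list(accumulate(increments))

increments = [0.010, 0.015, 0.020, 0.020, 0.025]  # illustrative only
risks = cumulative_risk(increments)  # cumulative risk at years 1..5
```

Monotonicity is the point: a model free to predict each horizon independently could claim a lower risk at four years than at three, which is clinically incoherent.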


How To Port Your Web App To Microsoft Teams

While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what are called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET. Tabs allow you to surface content in your app by essentially embedding a web page in Teams using an <iframe>. Teams was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users. One useful aspect of integrating your web apps with Teams is that you can pretty much use the developer tools you’re likely already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command-line tool or the Teams Toolkit Visual Studio Code extension and the Microsoft Teams JavaScript client SDK. They allow you to retrieve additional information and enhance the content you display in your Teams tab.
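As a sketch of what this looks like in practice, here is a trimmed, illustrative excerpt of a Teams app manifest declaring a static tab that surfaces an existing web page. A complete manifest requires additional fields (id, developer, icons, descriptions), and the URL and names below are placeholders:

```json
{
  "manifestVersion": "1.16",
  "version": "1.0.0",
  "staticTabs": [
    {
      "entityId": "dashboardTab",
      "name": "Dashboard",
      "contentUrl": "https://example.com/teams-tab",
      "scopes": ["personal"]
    }
  ],
  "validDomains": ["example.com"]
}
```

The contentUrl is the page Teams loads in the embedded iframe, and the domain must appear in validDomains for Teams to render it.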


How AI Can Read Your Brain Waves

The music study is only one of many recent efforts to understand what people are thinking using computers. The research could lead to technology that one day would help people with disabilities manipulate objects using their minds. For example, Elon Musk’s Neuralink project aims to produce a neural implant that allows you to carry a computer wherever you go. Tiny threads are inserted into areas of the brain that control movement. Each thread contains many electrodes and is connected to an implanted computer. "The initial goal of our technology will be to help people with paralysis to regain independence through the control of computers and mobile devices," according to the project’s website. "Our devices are designed to give people the ability to communicate more easily via text or speech synthesis, to follow their curiosity on the web, or to express their creativity through photography, art, or writing apps." Brain-machine interfaces might even one day help make video games more realistic. Gabe Newell, the co-founder and president of video game giant Valve, said recently that his company is trying to connect human brains to computers. The company is working to develop open-source brain-computer interface software, he said. 


Q&A: Dataiku VP discusses AI deployment in financial services

AI is also a real revolution within risk assessment, notably through the enhanced use of alternative data. This is true both for traditional risks and emerging risks such as climate change, helping all financial players — banks and insurers alike — to reconsider how they price risks. Those who have developed a strong expertise in leveraging alternative data and agile modeling have been able to truly benefit from their investment during the ongoing health crisis, which has deeply challenged traditional models. Lastly, the positive impact of AI on customers should not be underestimated. Financial services are confronted with an aggressive competitive landscape as well as demand from customers for improved personalisation, driving improved customer orientation in these organisations. The capacity to build 360° customer views and optimise customer journeys, notably on claims management, are two examples of areas where AI has significantly supported deep transformation within banks and insurance companies, with yet much more to be delivered.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - December 07, 2019

Why a computer will never be truly conscious


Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask its memory if that pattern could be created by alternate views of an item identified some time earlier. Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons.


One of the six customers impacted by the ransomware infection is FIA Tech, a financial and brokerage firm. The ransomware caused an outage of FIA Tech cloud services. In a message to customers, FIA Tech said "the attack was focused on disrupting operations in an attempt to obtain a ransom from our data center provider." FIA Tech did not name the data center provider, but a quick search identifies it as CyrusOne. We've been told by a source close to CyrusOne that the data center provider does not intend to pay the ransom demand, barring any unforeseen future developments. The company owns 45 data centers in Europe, Asia, and the Americas, and has more than 1,000 customers. It is also considering a sale after receiving takeover interest over the summer, according to Bloomberg. CyrusOne is a publicly traded, NASDAQ-listed company. In an SEC filing last year, the company explicitly listed "ransomware" as a risk factor for its business.


Costs are likely to continue to improve as, among other things, companies reduce the level of pricey cobalt in battery components and achieve manufacturing improvements as production volumes rise. But metals mining is already a mature process, so further declines there are likely to slow rapidly after 2025 as the cost of materials makes up a larger and larger portion of the total cost, the report finds. Deeper cost declines beyond 2030 are likely to require shifts from the dominant lithium-ion chemistry today to entirely different technologies, like lithium-metal, solid-state and lithium-sulfur batteries. Each of these are still in much earlier development stages, so it’s questionable whether any will be able to displace lithium-ion by 2030, Field says. Gene Berdichevsky, chief executive of anode materials maker Sila Nanotechnologies, agrees it will be hard for the industry to consistently break through the $100/kWh floor with current technology. But he also thinks the paper discounts some of the nearer-term improvements we’ll see in lithium-ion batteries without full-fledged shifts to different chemistries.


Banking as a Platform- The Future is Now!

No matter what type or size a platform offering may be, some of the following will be a must. Embedded analytics, running like an omnipresent undercurrent, will become a hygiene requirement and will also play a major role in revenue generation and profitability from the platform. AI and ML will be key differentiators in enhancing user experience and operational efficiency, in turn leading to monetary benefits. BaaP’s DNA will be defined by how good the bank’s API strategy is, and complete agility in the usage of APIs will be the new norm for business teams. Scaling and multiple usage, along with data privacy and cyber security compounded with regulatory guidelines, will be crucial for the smooth and safe functioning of BaaP, and these aspects will be central to any decision making by banks. It may sound a little too audacious to talk about the future of platform banking while it is still in a nascent stage. But history always serves as a great recipe for predicting the future (we are taking the data analytics route!).


AI Policies Are Setting Stage To Transform Healthcare But More Is Needed

New data standards proposals by Health and Human Services will empower patients and lead to better, faster diagnosis. The proposal would require electronic medical record (EMR) companies to provide portals called APIs for patients to access and share their health data. Currently it is ridiculously complex and expensive for patients to get copies of their own records. Shockingly, it may cost over $500 to get your medical record. Accessibility and data sharing are critical for better, faster diagnosis and treatments. AI has been used to predict heart attacks five years into the future. It is also able to predict who is at the greatest risk for suffering from depression. The new standards would put the patient in control of who uses their data and for what purposes. Entrepreneurial companies are leveraging venture dollars to build the best AI capabilities in the world, but they need access to health data to prove their benefits to patients. Patients should be able to choose how their data is used.


Why A Human Firewall Is The Biggest Defence Against Data Breach

Hackers are targeting servers that haven’t been set up correctly, giving them access to sensitive data with minimal effort. Cloud-based systems such as Office 365 without multi-factor authentication enabled, or web-based systems that are not patched, result in vulnerabilities that can be exploited. Also, hardware such as firewalls can sometimes be configured incorrectly, or poor security settings on individual devices can lead to loopholes that can be exploited. ... A hacker only needs to gain access to one user’s account to gain control and access the compromised network and data. An approach known as the “known good” model works by identifying anomalies that stray from the established normal baseline and highlighting them as potential threats and cyber-attacks. Business leaders are widely criticised and held accountable for failing to protect their consumers’ data, especially in light of the vast IT and training budgets at their disposal, yet it is the daily performance of front-line staff that reveals the true strengths and weaknesses within any organisation.
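The “known good” model can be illustrated with a few lines of Python: establish a baseline from observed behaviour, then flag values that stray too far from it. The threshold and the download counts below are invented for illustration; real products model many signals at once:

```python
from statistics import mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations away
    from the established "known good" baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > threshold * sigma

# A user's typical file downloads per day (invented numbers)
baseline = [12, 9, 11, 10, 13, 12, 10, 11]
mass_download = is_anomalous(250, baseline)   # sudden spike
routine_day = is_anomalous(12, baseline)      # within normal range
```

The key design choice is that nothing is signature-based: the detector never needs to know what the attack looks like, only what normal looks like.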


Two Russians indicted over Dridex and Zeus malware


“Sitting quietly at computer terminals far away, these cyber criminals allegedly stole tens of millions of dollars from unwitting members of our business, non-profit, governmental, and religious communities. “Each and every one of these computer intrusions was, effectively, a cyber-enabled bank robbery. We take such crimes extremely seriously and will do everything in our power to hold these criminals to justice.” The losses incurred through the activities of Yakubets’ group – known as Evil Corp – totalled hundreds of millions of pounds in both the UK, the US, and other countries. Additional investigations in the UK targeted a network of money launderers who funnelled profits back to Evil Corp, for which eight people have already gone to prison. Other intelligence supplied through UK law enforcement has helped support sanctions brought against the group by the US Treasury’s Office of Foreign Asset Control. The NCA described the operation as a sophisticated and technically skilled one, which represented one of the most significant cyber crime threats ever faced in the UK.


Your Privacy Could Be at Risk Without These Updates to Behavioral Biometrics


Mastercard is one of the major brands investing in passive biometrics. The goal is to determine the probability that the authenticated user is present during the respective interactions. The credit card provider’s system evaluates more than 300 signals to make a conclusion. They include how a person navigates around a site on their device or the amount of pressure they put on a touch-sensitive screen. Passive behavioral biometrics measurements also allow catching some strange behaviors that might not immediately become apparent through small samples of data. For example, if a person typically uses the scroll wheel on a mouse to navigate, but then switches to using keyboard commands, that change could indicate someone else has gotten access to a system and is using it fraudulently. Keep in mind that passive and active biometrics both have associated pros and cons. No single method works best in every case. However, the use of passive biometrics to gauge probabilities is relatively new. Since well-known brands like Mastercard are working with it, there’s a good chance this option will become even more prominent.


FBI recommends that you keep your IoT devices on a separate network

"Your fridge and your laptop should not be on the same network," the FBI's Portland office said in a weekly tech advice column. "Keep your most private, sensitive data on a separate system from your other IoT devices," it added. ... The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a "smart" device will not grant an attacker a direct route to a user's primary devices -- where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not sound that easy for non-technical users. The simplest way is to use two routers. The smarter way is to use "micro-segmentation," a feature found in the firmware of most WiFi routers, which allows router admins to create virtual networks (VLANs). VLANs behave as different networks, even though they effectively run on the same router. While isolating IoT devices on their own network is the best course of action for both home users and companies alike, this wasn't the FBI's only advice on dealing with IoT devices.
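For readers curious what the single-router VLAN setup looks like, the configuration amounts to a pair of interfaces bound to different VLAN IDs. Below is a hypothetical OpenWrt-style fragment; the interface names, VLAN IDs, and addresses are made up, and the exact syntax varies by router firmware, so consult your device's documentation:

```
# /etc/config/network (illustrative OpenWrt-style fragment)

config interface 'lan'             # trusted devices: laptops, phones
        option device 'br-lan.1'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'

config interface 'iot'             # untrusted smart devices
        option device 'br-lan.20'  # separate VLAN ID
        option proto 'static'
        option ipaddr '192.168.20.1'
        option netmask '255.255.255.0'
```

A firewall rule denying traffic from the IoT network into the trusted one completes the isolation; without it, the two VLANs are separate in name only.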


Usability Testing and Testing APIs with Hallway Testing

Hallway testing can be described as asking "random" people or groups of people to test software products and interfaces. How "random" a person should be depends on what we are trying to test. Marchewka suggested engaging people who will actually be using the product (i.e., members of the target group) to get the best understanding of how they will do so. For their hallway testing sessions they invite a truly random group of people if they are checking a mobile app, and a random group of API users if they are verifying the UX of an API. Drawing on the specific background and experience of the people taking part in a particular hallway testing session, we can uncover the inefficiencies of the user interface in the tested product, said Marchewka. The app or software does not need to have a GUI to benefit from hallway testing; it can be used as part of API prototyping activity, as Marchewka explained. Consumers of an API can be asked to use an early version during a hallway testing session; for example, creators can find out if methods are named correctly.



Quote for the day:


"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis


Daily Tech Digest - July 19, 2018

6 usability testing methods that will improve your software

Successful software projects please customers, streamline processes, or otherwise add value to your business. But how do you ensure that your software project will result in the improvements you are expecting? Will users experience better performance? Will productivity across all tasks improve as you hoped? Will users be happy with your changes and return to your product again and again, as you envisioned? You don't find answers to these questions with a standard QA testing plan. Standard QA will ensure that your product works. Usability testing will ensure that your product accomplishes your business objectives. Well-planned usability testing will shed a bright light on everything you truly care about: workflow metrics, user satisfaction, and strength of design. How do you know when to start usability testing? Which usability tests are right for your product or website? Let's examine the six types of usability testing you can use to improve your software.



Facial Recognition Backlash: Technology Giants Scramble

Microsoft's president responded specifically to those allegations in his blog post, first touching on Microsoft's work with ICE, a law enforcement agency that is part of the U.S. Department of Homeland Security. "We've since confirmed that the contract in question isn't being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we've strongly objected," Smith said. Instead, the contract involves supporting the agency's "legacy email, calendar, messaging and document management workloads," Smith said. But at what point should an organization put down its foot with a federal agency operating in a manner to which at least some of its employees object? "This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world," Smith said. "Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE."


How to Query JSON Data with SQL Server 2016


JSON (JavaScript Object Notation) is now the ubiquitous language for moving data among independent and autonomous systems, the primary function of most software these days. JSON is a text-based way to depict the state of an object in order to easily serialize and transfer it across a network from one system to the next -- especially useful in heterogeneous environments. Because a JSON string equates to a plain text string, SQL Server and any other relational database management system (RDBMS) will let you work with JSON, as they all allow for storing strings, no matter their presentation. That capability is enhanced in SQL Server 2016, the first-ever version that lets developers query within JSON strings as if the JSON were organized into individual columns. What's more, you can read and save existing tabular data as JSON. For a structured and comprehensive overview of the JSON functions in SQL Server 2016, read the "JSON Data (SQL Server)" MSDN documentation. Also, the "JSON Support in SQL Server 2016" Redgate Community article provides a more business-oriented view of JSON in SQL Server 2016, along with a scenario-based perspective of the use of JSON data in a relational persistence layer.
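SQL Server itself can't run here, but the "query inside a JSON string as if it were columns" idea can be sketched with SQLite's `json_extract` function (assuming the json1 functions are compiled in, as they are in most modern Python builds). In SQL Server 2016 the equivalent calls would be `JSON_VALUE` and `OPENJSON`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, doc TEXT)")
# The JSON document is stored as a plain text column, as it would be
# in any RDBMS.
conn.execute(
    "INSERT INTO orders (doc) VALUES (?)",
    ('{"customer": "Acme", "total": 125.50, "items": 3}',),
)

# Pull individual JSON properties out as if they were columns.
row = conn.execute(
    "SELECT json_extract(doc, '$.customer'), json_extract(doc, '$.total') "
    "FROM orders"
).fetchone()
print(row)  # ('Acme', 125.5)
```

The same `$.path` syntax is what SQL Server 2016's `JSON_VALUE(doc, '$.customer')` uses, so the mental model transfers directly.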


Heuristic automation prevents unmitigated IT disasters


IT platforms are constantly under attack from all sorts of malicious efforts, ranging from open-port sweeps to intrusion attempts and denial-of-service assaults, such as the sophisticated distributed DoS attack that took down Dyn in 2016. Historically, IT and security professionals identify that an attack is happening and then simply apply a predefined fix to deal with the problem. With heuristic automation in the mix, automation becomes responsive to the changes the attack causes in the IT environment. Instead of applying a simple and often ineffective fix, a heuristic IT management system looks at the IT deployment as a whole and applies the right fix for the situation. In this example, heuristic automation could change traffic patterns to offload incoming streams to a separate area of the platform and block certain traffic from accessing those streams. It could also reallocate running workloads to a public cloud instead of the private cloud, or vice versa, to prevent service disruption. Provide the heuristics engine with information about possible attacks, and it can harden the platform in real time to prevent them from ever happening.
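A minimal sketch of that responsiveness, with invented thresholds and action names: rather than returning a single fixed response, the engine inspects the observed state of the platform and returns the set of mitigations that fits it.

```python
def choose_mitigation(state):
    """Return an ordered list of actions for the observed conditions.

    All metric names, thresholds, and action names are hypothetical.
    """
    actions = []
    if state["incoming_gbps"] > 10:
        # Flood-scale traffic: divert streams before they hit services.
        actions.append("offload_traffic_to_scrubbing_area")
    if state["suspect_source_ratio"] > 0.5:
        actions.append("block_suspect_sources")
    if state["private_cloud_load"] > 0.9:
        # Private capacity exhausted: burst workloads to the public cloud.
        actions.append("reallocate_workloads_to_public_cloud")
    return actions or ["monitor_only"]

ddos = {"incoming_gbps": 40, "suspect_source_ratio": 0.8, "private_cloud_load": 0.95}
quiet = {"incoming_gbps": 1, "suspect_source_ratio": 0.0, "private_cloud_load": 0.4}
print(choose_mitigation(ddos))   # all three mitigations, in order
print(choose_mitigation(quiet))  # ['monitor_only']
```

A real heuristics engine would learn or tune these rules from attack telemetry instead of hard-coding them, but the shape — observe the whole deployment, then pick the response — is the same.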


What’s new in the Anaconda distribution for Python

Anaconda, the Python language distribution and work environment for scientific computing, data science, statistical analysis, and machine learning, is now available in version 5.2, with additions to both its enterprise and open-source community editions. ... This enterprise edition of Anaconda, released this week, adds new features around job scheduling, integration with Git, and GPU acceleration. Earlier versions of Anaconda Enterprise were built to allow professionals to leverage multiple machine learning libraries in a business context—TensorFlow, MXNet, Scikit-learn, and more. In version 5.2, Anaconda offers ways to train models on a securely shared central cluster of GPUs, so that models can be trained faster and more cost-effectively. Also new in Anaconda Enterprise is the ability to integrate with external code repositories and continuous integration tools, such as Git, Mercurial, GitHub, and Bitbucket. A new job scheduling system allows tasks to be run at regular intervals—for instance, to retrain a model on new data. 


Are organizations over-engineering their data centers?


With such incredible off-premise computing momentum, the potential impact of a widespread outage at a major data center provider grows daily. Enterprises are acutely aware of how outages could affect their mission-critical data – security was listed as a major concern by 77 percent of cloud users in RightScale's report. Understandably, data center owners and operators have placed resiliency at the top of their priorities and turn to third-party certifiers to help address the most common root causes of outages, including human error, software issues, network downtime, and hardware failure combined with a corresponding failure of high-availability architecture. However, there are limited offerings for data center operators to get a holistic audit of all the factors that contribute to the resiliency of their services. We've been hearing directly from providers that existing offerings have not kept pace with the rate of change in the industry. Incumbent programs will sometimes require a facility to be unnecessarily over-engineered. That is not cost-effective, and it takes the focus away from what truly matters to enterprise users: security and reliability.


Raspberry Pi supercomputers: From DIY clusters to 750-board monsters

While the $35 Pi is by no means a computing powerhouse, in recent years enthusiasts have begun harnessing the power of armies of the tiny boards. There's a wide range of Pi clusters out there, from modest five-board arrangements all the way up to sprawling 750-Pi machines. If you're curious to find out more, here are five Pi clusters built in recent years, starting with some you can try yourself and moving on to the Pi-based supercomputers being built by research labs. ... The Los Alamos National Lab (LANL) machine serves as a supercomputer testbed and is built from a cluster of 750 Raspberry Pis, which may later grow to 10,000 Pi boards. According to Gary Grider, head of LANL's HPC division, the Raspberry Pi cluster offers the same testing capabilities as a traditional supercomputing testbed, which could cost as much as $250m. In contrast, 750 Raspberry Pi boards at $35 each come to just $26,250, though the actual cost of installing the rack-mounted Pi clusters, designed by Bitscope, would likely be higher. Grider highlights power-efficiency benefits too, and estimates that each board in a several-thousand-node Pi-based system would use just 2W to 3W.
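The cost and power figures are easy to sanity-check from the per-board numbers quoted above:

```python
# Per-board numbers from the article: $35 per Pi, an estimated
# 2W-3W draw per board, 750 boards in the cluster.
boards = 750
price_per_board = 35        # USD
power_per_board = (2, 3)    # watts, estimated low/high

board_cost = boards * price_per_board
power_range_w = tuple(boards * w for w in power_per_board)

print(board_cost)      # 26250  (board cost only, excluding racks/install)
print(power_range_w)   # (1500, 2250) watts for the whole cluster
```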


LabCorp Cyberattack Impacts Testing Processes

"LabCorp immediately took certain systems offline as part of its comprehensive response to contain the activity," the company said in its SEC filing. "This temporarily affected test processing and customer access to test results on or over the weekend. Work has been ongoing to restore full system functionality as quickly as possible, testing operations have substantially resumed [Monday], and we anticipate that additional systems and functions will be restored through the next several days." Some customers of LabCorp Diagnostics may experience brief delays in receiving results as the company completes that process, LabCorp added. "The suspicious activity has been detected only on LabCorp Diagnostics systems. There is no indication that it affected systems used by Covance Drug Development," a research unit of LabCorp, the company said. "At this time, there is no evidence of unauthorized transfer or misuse of data. LabCorp has notified the relevant authorities of the suspicious activity and will cooperate in any investigation."


An introduction to ICS threats and the current landscape


An ICS is a key underlying element of the OT world. According to the National Institute of Standards and Technology report NIST SP 800-82 R2, "Guide to Industrial Control Systems (ICS) Security," ICS is a "general term that encompasses several types of control systems, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as skid-mounted Programmable Logic Controllers (PLC) often found in the industrial sectors and critical infrastructures." ICS is used in the industrial, manufacturing and critical infrastructure sectors. For instance, railway controls are a type of SCADA. A street light controller may be a PLC, but it can also be part of a SCADA system. Finally, an ICS includes combinations of control components, including electrical, mechanical, hydraulic or pneumatic, that act together to achieve an industrial objective, such as manufacturing, transportation, or the distribution of material or energy.


Q&A on the Book Testing in the Digital Age

A good example of generating test cases is the use of an evolutionary algorithm to test automated parking in a car. You can imagine that with automatic parking, the number of situations the car can be in is nearly infinite. The starting position may vary, surrounding cars may be positioned in many different ways, and other obstacles that must not be hit may be around the car. The automatic parking function must not hit anything while parking, and the car needs to end up parked correctly. In this case we can generate a series of starting positions that the automatic parking function needs to tackle. Ideally this is virtual, so we can run a lot of tests quickly. It could be physical tests of course, but test execution would take more time. We need to define a fitness function that is evaluated with each test execution run. In this case it would be a degree of passing for the parked car: you can imagine some points for not hitting anything, and points for how well the car is parked in the end. Now we generate a series of tests and run them. Each outcome is evaluated and assigned a total point value.
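A toy sketch of that loop, with an invented stand-in for the parking function: here a genome is just a one-dimensional starting offset, the fake `park` function pretends large offsets cause collisions, and the search evolves toward the start positions the parking function handles worst. A real setup would run each generated start position through a virtual parking simulation.

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def park(start_offset):
    """Hypothetical system under test: returns (hit_something, parking_error)."""
    hit = abs(start_offset) > 3.0              # pretend large offsets cause a collision
    error = min(abs(start_offset) * 0.2, 1.0)  # pretend alignment worsens with offset
    return hit, error

def fitness(start_offset):
    """Degree of passing, as in the text: points for no collision plus alignment."""
    hit, error = park(start_offset)
    return (0.0 if hit else 10.0) + (1.0 - error)

# Generate a series of starting positions, run them, and evolve toward
# the scenarios with the lowest degree of passing.
population = [random.uniform(-5.0, 5.0) for _ in range(20)]
for _ in range(25):
    population.sort(key=fitness)               # worst-handled scenarios first
    survivors = population[:10]
    # Mutate the survivors slightly to explore nearby start positions.
    population = survivors + [s + random.gauss(0, 0.5) for s in survivors]

hardest = min(population, key=fitness)
print(park(hardest))  # a collision-causing start position found by the search
```

The same skeleton scales up by making the genome richer (surrounding-car layouts, obstacle positions) and swapping the fake `park` for the real simulation.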



Quote for the day:


"Strength lies in differences, not in similarities." -- Stephen R. Covey