
Daily Tech Digest - August 24, 2025


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Creating the ‘AI native’ generation: The role of digital skills in education

Boosting AI skills has the potential to drive economic growth and productivity and create jobs, but ambition must be matched with effective delivery. We must ensure AI is integrated into education in a way that encourages students to maintain critical thinking skills, skeptically assess AI outputs, and use it responsibly and ethically. Education should also inspire future tech talent and prepare them for the workplace. ... AI fluency is only one part of the picture. Amid a global skills gap, we also need to capture the imaginations of young people to work in tech. To achieve this, AI and technology education must be accessible, meaningful, and aspirational. That requires coordinated action from schools, industry, and government to promote the real-world impact of digital skills, create clearer, more inspiring pathways into tech careers, and expose students to how AI is applied in various professions. Early exposure to AI can do far more than build fluency: it can spark curiosity, confidence and career ambition towards high-value sectors like data science, engineering and cybersecurity—areas where the UK must lead. ... Students who learn how to use AI now will build the competencies that industries want and need for years to come. But this is only the first stage of a broader AI learning arc, in which learning and upskilling become a lifelong mindset, not a single milestone.


What is the State of SIEM?

In addition to high deployment costs, many organizations grapple with implementing SIEM. A primary challenge is SIEM configuration -- given that the average organization has more than 100 different data sources that must plug into the platform, according to an IDC report. It can be daunting for network staff to do the following when deploying SIEM: choose which data sources to integrate; set up SIEM correlation rules that define what will be classified as a security event; and determine the alert thresholds for specific data and activities. It's equally challenging to manage the information and alerts a SIEM platform issues. If the rules are tuned too tightly, the result can be a flood of false positives as the system triggers alarms about events that aren't actually threats. This is a time-stealer for network techs and can lead to staff fatigue and frustration. In contrast, if the calibration is too liberal, organizations run the risk of overlooking something that could be vital. Network staff must also coordinate with other areas of IT and the company. For example, what if data safekeeping and compliance regulations change? Does this change SIEM rule sets? What if the IT applications group rolls out new systems that must be attached to SIEM? Can the legal department or auditors tell you how long to store and retain data for eDiscovery or for disaster backup and recovery? And which data is noise that can be discarded as waste?
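To make the tuning trade-off concrete, here is a minimal sketch in Python of the kind of correlation rule and alert threshold a SIEM team has to calibrate. The event fields, the five-failures threshold and the ten-minute window are illustrative assumptions, not taken from any particular product.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical correlation rule: alert when one source IP produces too many
# failed logins within a sliding time window. The threshold and window are
# exactly the values that need careful tuning: too tight and analysts drown
# in false positives, too loose and a real brute-force attempt slips through.
FAILED_LOGIN_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

recent_failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def process_event(event):
    """event is a dict like {"time": datetime, "type": "auth_failure", "src_ip": "..."}."""
    if event["type"] != "auth_failure":
        return None
    ip, now = event["src_ip"], event["time"]
    q = recent_failures[ip]
    q.append(now)
    # Drop failures that fell out of the sliding window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= FAILED_LOGIN_THRESHOLD:
        return {"alert": "possible brute force", "src_ip": ip, "count": len(q)}
    return None

if __name__ == "__main__":
    base = datetime(2025, 8, 24, 12, 0)
    for i in range(6):
        alert = process_event({"time": base + timedelta(minutes=i),
                               "type": "auth_failure", "src_ip": "10.0.0.7"})
        if alert:
            print(alert)
```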


AI Data Centers: A Popular Term That’s Hard to Define

The tricky thing about trying to define AI data centers based on characteristics like those described above is that none of those features is unique to AI data centers. For example, hyperscale data centers – meaning very large facilities capable of accommodating more than a hundred thousand servers in some cases – existed before modern AI debuted. AI has made large-scale data centers more important because AI workloads require vast infrastructures, but it’s not as if no one was building large data centers before AI rose to prominence. Likewise, it has long been possible to deploy GPU-equipped servers in data centers. ... Likewise, advanced cooling systems and innovative approaches to data center power management are not unique to the age of generative AI. They, too, predated AI data centers. ... Arguably, an AI data center is ultimately defined by what it does (hosting AI workloads) more than by how it does it. So, before getting hung up on the idea that AI requires investment in a new generation of data centers, it’s perhaps healthier to think about how to leverage the data centers already in existence to support AI workloads. That perspective will help the industry avoid the risk of overinvesting in new data centers designed specifically for AI – and as a bonus, it may save money by allowing businesses to repurpose the data centers they already own to meet their AI needs as well.


Password Managers Vulnerable to Data Theft via Clickjacking

Tóth showed how an attacker can use DOM-based extension clickjacking and the autofill functionality of password managers to exfiltrate sensitive data stored by these applications, including personal data, usernames and passwords, passkeys, and payment card information. The attacks demonstrated by the researcher require 0-5 clicks from the victim, with a majority requiring only one click on a harmless-looking element on the page. The single-click attacks often involved exploitation of XSS or other vulnerabilities. DOM, or Document Object Model, is an object tree created by the browser when it loads an HTML or XML web page. ... Tóth’s attack involves a malicious script that manipulates user interface elements injected by browser extensions into the DOM. “The principle is that a browser extension injects elements into the DOM, which an attacker can then make invisible using JavaScript,” he explained. According to the researcher, some of the vendors have patched the vulnerabilities, but fixes have not been released for Bitwarden, 1Password, iCloud Passwords, Enpass, LastPass, and LogMeOnce. SecurityWeek has reached out to these companies for comment. Bitwarden said a fix for the vulnerability is being rolled out this week with version 2025.8.0. LogMeOnce said it’s aware of the findings and its team is actively working on resolving the issue through a security update.


Iskraemeco India CEO: ERP, AI, and the future of utility leadership

We see a clear convergence ahead, where ERP systems like Infor’s will increasingly integrate with edge AI, embedded IoT, and low-code automation to create intelligent, responsive operations. This is especially relevant in utility scenarios where time-sensitive data must drive immediate action. For instance, our smart kits – equipped with sensor technology – are being designed to detect outages in real time and pinpoint exact failure points, such as which pole needs service during a natural disaster. This type of capability, powered by embedded IoT and edge computing, enables decisions to be made closer to the source, reducing downtime and response lag.  ... One of the most important lessons we've learned is that success in complex ERP deployments is less about customisation and more about alignment, across leadership, teams, and technology. In our case, resisting the urge to modify the system and instead adopting Infor’s best-practice frameworks was key. It allowed us to stay focused, move faster, and ensure long-term stability across all modules. In a multi-stakeholder environment – where regulatory bodies, internal departments, and technology partners are all involved – clarity of direction from leadership made all the difference. When the expectation is clear that we align to the system, and not the other way around, it simplifies everything from compliance to team onboarding.


Experts Concerned by Signs of AI Bubble

"There's a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears," Kai Wu, founder and chief investment officer of Sparkline Capital, told the Wall Street Journal last year. There are even doubters inside the industry. In July, recently ousted CEO of AI company Stability AI Emad Mostaque told banking analysts that "I think this will be the biggest bubble of all time." "I call it the 'dot AI’ bubble, and it hasn’t even started yet," he added at the time. Just last week, Jeffrey Gundlach, billionaire CEO of DoubleLine Capital, also compared the AI craze to the dot com bubble. "This feels a lot like 1999," he said during an X Spaces broadcast last week, as quoted by Business Insider. "My impression is that investors are presently enjoying the double-top of the most extreme speculative bubble in US financial history," Hussman Investment Trust president John Hussman wrote in a research note. In short, with so many people ringing the alarm bells, there could well be cause for concern. And the consequences of an AI bubble bursting could be devastating. ... While Nvidia would survive such a debacle, the "ones that are likely to bear the brunt of the correction are the providers of generative AI services who are raising money on the promise of selling their services for $20/user/month," he argued.


OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private. “As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper. ... The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements.  ... The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.
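The paper’s exact annotation schema is not reproduced in the article, but a rough sketch of what one CoT-augmented trajectory step could look like, with the three levels described above, might be the following. All field names are hypothetical, not the format used by OpenCUA.

```python
# A hypothetical, simplified record for one step of a computer-use trajectory,
# augmented with the three reasoning levels described above: an observation of
# the screen, a reflective thought, and the concise executable action.
step = {
    "screenshot": "step_042.png",                 # raw screen capture
    "accessibility_tree": "step_042_a11y.json",   # structured on-screen elements
    "observation": "A save dialog is open with the filename field focused.",
    "thought": (
        "The report has been edited; to finish the task I should save it "
        "under the requested name before closing the editor."
    ),
    "action": {"type": "type_text", "target": "filename_field", "text": "q3_report.xlsx"},
}

def render_training_example(step):
    """Flatten one annotated step into a prompt/completion pair for supervised fine-tuning."""
    prompt = (f"Observation: {step['observation']}\n"
              f"Thought: {step['thought']}\nAction:")
    completion = f" {step['action']}"
    return prompt, completion

print(render_training_example(step)[0])
```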


How to remember everything

MyMind is a clutter-free bookmarking and knowledge-capture app without folders or manual content organization. There are no templates, manual customizations, or collaboration tools. Instead, MyMind recognizes and formats the content type elegantly. For example, songs, movies, books, and recipes are displayed differently based on MyMind’s detection, regardless of the source, as are pictures and videos. MyMind uses AI to auto-tag everything and allows custom tags. Every word, including those in pictures, is indexed. You can take pictures of information, upload them to MyMind, and find them later by searching a word or two found in the picture. Copying a sentence or paragraph from an article will display the quote with a source link. Every data chunk is captured in a “card.” ... Alongside AI-enabled lifelogging tools like MyMind, we’re also entering an era of lifelogging hardware devices. One promising direction comes from a startup called Brilliant Labs. Its new $299 Halo glasses, available for pre-order and shipping in November, are lightweight AI glasses. The glasses have a long list of features — bone-conduction sound, a camera, a lightweight design, etc. — but the lifelogging enabler is an “agentic memory” system called Narrative. It captures information automatically from the camera and microphones and places it into a personal knowledge base.


From APIs to Digital Twins: Warehouse Integration Strategies for Smarter Supply Chains

Digital twins create virtual replicas of warehouses and supply chains for monitoring and testing. A digital twin ingests live data from IoT sensors, machines, and transportation feeds to simulate how changes affect outcomes. For instance, GE’s “Digital Wind Farm” project feeds sensor data from each turbine into a cloud model, suggesting performance tweaks that boost energy output by ~20% (worth ~$100M more revenue per turbine). In warehousing, digital twins can model workflows (layout changes, staffing shifts, equipment usage) to identify bottlenecks or test improvements before physical changes. Paired with AI, these twins become predictive and prescriptive: companies can run thousands of what-if scenarios (like a port strike or demand surge) and adjust plans accordingly. ... Today’s warehouses are not just storage sheds; they are smart, interconnected nodes in the supply chain. Leveraging IIoT sensors, cloud APIs, AI analytics, robotics, and digital twins transforms logistics into a competitive advantage. Integrated systems reduce manual handoffs and errors: for example, automated picking and instant carrier booking can shorten fulfillment cycles from days to hours. Industry data bear this out: deploying these technologies can improve on-time delivery by ~20% and significantly lower operating costs.
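As a toy illustration of the what-if idea (not a real warehouse model), the sketch below runs a few scenarios against a highly simplified picking-line simulation and compares fill rates before any physical change is made. All parameters and numbers are invented for the example.

```python
import random

# Toy "digital twin" of a warehouse picking line: just an illustration of
# running many what-if scenarios against a baseline and comparing outcomes
# before touching the physical layout or staffing plan.
def simulate_day(pickers, orders, base_rate=22, surge=1.0):
    """Return the fraction of orders fulfilled in one simulated day."""
    demand = int(orders * surge)
    capacity = sum(random.gauss(base_rate, 3) for _ in range(pickers)) * 8  # 8-hour shift
    return min(1.0, capacity / demand)

def what_if(scenarios, runs=1000):
    results = {}
    for name, params in scenarios.items():
        results[name] = sum(simulate_day(**params) for _ in range(runs)) / runs
    return results

scenarios = {
    "baseline":                {"pickers": 12, "orders": 2000},
    "demand surge +30%":       {"pickers": 12, "orders": 2000, "surge": 1.3},
    "surge + 3 extra pickers": {"pickers": 15, "orders": 2000, "surge": 1.3},
}
for name, fill_rate in what_if(scenarios).items():
    print(f"{name:>24}: average fill rate {fill_rate:.1%}")
```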


Enterprise Software Spending Surges Despite AI ROI Shortfalls

AI capabilities increasingly drive software purchasing decisions. However, many organizations struggle with the gap between AI promise and practical ROI delivery. The disconnect stems from fundamental challenges in data accessibility and contextual understanding. Current AI implementations face significant obstacles in accessing the full spectrum of contextual data required for complex decision-making. "In complex use cases, where the exponential benefits of AI reside, AI still feels forced and contrived when it doesn't have the same amount and depth of contextual data required to read a situation," Kirkpatrick explained. Effective AI implementation requires comprehensive data infrastructure investments. Organizations must ensure AI models can access approved data sources while maintaining proper guardrails. Many IT departments are still working to achieve this balance. The challenge intensifies in environments where AI needs to integrate across multiple platforms and data sources. Well-trained humans often outperform AI on complex tasks because their experience allows them to read multiple factors and adjust contextually. "For AI to mimic that experience, it requires a wide range of data that can address factors across a wide range of dimensions," Kirkpatrick said. "That requires significant investment in data to ensure the AI has the information it needs at the right time, with the proper context, to function seamlessly, effectively, and efficiently."

Daily Tech Digest - August 12, 2022

7 best reasons to be a CISO

As they become key players in wider business matters, modern CISOs can develop their credentials and knowledge beyond hands-on security skills and abilities. “Our role is continuously expanding,” Smart says. “Today, I am also responsible for governance, risk and compliance, which opens up more avenues into setting a cohesive plan and strategy for security and risk management that impacts the whole business,” she adds. “The modern CISO can make use of a wide range of skills, beyond technical cybersecurity, and explore more areas of interest within the business,” Stapleton agrees. “As the cybersecurity landscape is constantly changing, there are always new and fascinating topics to dive into, so a CISO is never bored.” “The Disabled CISO,” the Twitter handle of an anonymous CISO of a global company, tells CSO that security now touches every part of the business, driving CISOs to positively engage with and learn from all corners of a company. “I love getting out and joining colleagues at the coalface. To protect the business, I need to understand how we operate and the challenges that presents to colleagues ..."


Should We Build Quantum Computers at All?

Using quantum computers, physicists want to simulate and unearth unusual states of matter; pharmaceutical companies want to discover new types of drugs; auto companies want to paint cars faster. While no one has conclusively demonstrated the utility of quantum computers, their potential seems endless. Emma McKay offers a provocative counterpoint. In the face of climate change, societal inequality, and other global problems, McKay, a PhD student in education at McGill University, thinks that perhaps we don’t need to develop quantum computing at all. “I haven’t seen any reasons compelling enough to me,” McKay, who uses they/them pronouns, told APS News. ... Maybe quantum annealers [a type of quantum computer] will be able to help us manage resources more efficiently. But it appears that people are most interested in using these types of technology to optimize things that suck, like optimizing traffic for single-person vehicles when widely available public transit, via buses and cycling infrastructure, is possible and the best way to reduce congestion and pollution from private vehicles in a city.


Are Application-Specific Chains the Future of Blockchain?

As decentralized application (dApp) developers gain more experience working with blockchains, some are running into limitations created by the parameters of blockchain architecture. Ethereum, for instance, allows for applications to be created via smart contracts, but does not allow for automatic execution of code. It also maintains fairly strict control over the way consensus and networking functions are exposed to those applications. To overcome these limitations, some developers are turning to application-specific blockchains — purpose-built and tuned for their specific application needs, and colloquially called “appchains.” One of the more popular options for building appchains is the Cosmos SDK, due to built-in composability, interconnected blockchains, and the ability for developers to maintain sovereignty over their blockchain. We’ve covered Cosmos in the past, including a developer academy for learning to build in the Cosmos Network and the addition of Interchain Security, which allows multiple Cosmos blockchains to align around common security protocols while maintaining sovereignty.


A Long-Awaited IoT Reverse Engineering Tool Is Finally Here

The tool was specifically designed to elucidate internet-of-things (IoT) device firmware and the compiled “binaries” running on anything from a home printer to an industrial door controller. Dubbed FRAK, the Firmware Reverse Analysis Console aimed to reduce overhead so security researchers could make progress assessing the vast and ever-growing population of buggy and vulnerable embedded devices rather than getting bogged down in tedious reverse engineering prep work. Cui promised that the tool would soon be open source and available for anyone to use. “This is really useful if you want to understand how a mysterious embedded device works, whether there are vulnerabilities inside, and how you can protect these embedded devices against exploitation,” Cui explained in 2012. “FRAK will be open source very soon, so we’re working hard to get that out there. I want to do one more pass, internal code review before you guys see my dirty laundry.” He was nothing if not thorough. A decade later, Cui and his company, Red Balloon Security, are launching Ofrak, or OpenFRAK, at DefCon in Las Vegas this week.


Is cloud computing immune from economic downturns?

First, and most important, many businesses now consider IT spending to be directly reflected in the value built within the enterprise. IT systems are no longer just for tactical uses such as processing transactions. Instead, cloud systems are becoming the business itself. The businesses disrupting their markets are doing so with their own unique innovations. They can only create these innovations by developing core IT systems using digital transformation processes and cloud computing. IT is no longer a cost center but an investment that needs to be nurtured. This new outlook is seen in manufacturing companies invested in supply chain automation using cloud-based artificial intelligence capabilities and cloud-based blockchain to lower costs and increase productivity. It’s seen in businesses that are entirely based on technology offerings, such as ride-sharing or residence-sharing applications. Many investors and company executives now believe software will define the future of business. IT is the engine that can build and use these systems; thus it’s a budgetary line item that boards and executives are reluctant to touch.


Cybersecurity and Technology Industry Leaders Launch Open-Source Project to Help Organizations Detect and Stop Cyberattacks Faster and More Effectively

"Every business deserves a simple, straightforward way to analyze and understand the security landscape – and that starts with their data," said John Graham-Cumming, CTO at Cloudflare. "By participating in the OCSF, we hope to help the entire security industry focus on doing the work that matters instead of wasting countless hours and resources on formatting data." "At CrowdStrike, our mission is to stop breaches and power productivity for organizations," said Michael Sentonas, Chief Technology Officer, CrowdStrike. "We believe strongly in the concept of a shared data schema, which enables organizations to understand and digest all data, streamline their security operations and lower risk. As a member of the OCSF, CrowdStrike is committed to doing the hard work to deliver solutions that organizations need to stay ahead of adversaries." "Modern cybersecurity operations is a team sport, and products must integrate with each other to provide value beyond what a single product can. Sure, it's possible to make that happen with open APIs and mapping data structures, but development and processing resources are not infinite," said Mohan Koo, Co-founder and CTO with DTEX Systems.


What Are Your Decision-Making Strengths and Blind Spots?

What do you do when you face an important but complicated decision? Do you turn to experts? Dig for data? Ask trusted friends and colleagues? Go with your gut? The truth is many of us approach decision making from the same perspective over and over. We use the same tools and habits every time, even if the decisions are vastly different. But following the same strategy for every problem limits your abilities. To make better decisions, you need to break out of these patterns and see things differently, even if it is uncomfortable. First, you need to understand your own decision-making strengths and your blind spots: What is the psychology of your decision making? What is your typical approach? What mental mistakes or cognitive biases tend to get in your way? Looking inward to what you value can illuminate why you make decisions the way you do — and how you might be shortchanging yourself with your approach. From there, you can disrupt your traditional processes.


The Rise of the ‘Fractional’ CMO and the Role CIOs Play

Relay Network's CMO Tal Klein points out the CIO/CDO has a vested interest in the interplay of technology and business. “Depending on what marketing pillar the fractional CMO is being brought onboard to address, the CIO may care a lot if the fractional CMO is being brought in to address operational issues like lead generation or lead-to-opportunity conversion velocity,” he says. That's because that kind of work relies heavily on technology and may impact changes to the company's CRM, website, or even communication infrastructure. “Whereas if the fractional CMO is being brought in to address messaging or market positioning, the CIO may have less of a vested stake in the recruitment efforts,” Klein says. Klein adds that, beyond the obvious infrastructure work associated with supporting marketing operations, the CIO or CDO may own many of the outputs from marketing engagements, such as compliance issues arising from capturing customer information, the security ramifications of new tools or processes, and ensuring whatever prospect or customer data marketing needs to run effective campaigns is available to them.


Hybrid work: What's changed – and what hasn't

With an overwhelming number of employees saying they want hybrid work to become the new normal, flexible work arrangements are becoming integral to an organization’s hiring and retention strategies. Pre-pandemic, industries that offered work flexibility were often considered somewhat progressive, and it was more the exception than the norm. Today, hybrid work is standard in a growing number of fields. Still, there are challenges. ... With employees potentially using personal devices and home Wi-Fi connections, IT security teams must constantly consider new vulnerabilities and strategies to remain safe. Clear policies and practices, along with training programs that reflect these new procedures, are essential for any successful hybrid work model. On the positive side, hybrid work reduces the impact on our environment. Working remotely means less paper consumption, less energy used to maintain office buildings, and less waste from consumable products in the workplace. It also provides team members an opportunity to practice sustainability when working at home.


Why SAP systems need to be brought into the cybersecurity fold

The problem is exacerbated by the variety of attack vectors that cybercriminals are leveraging to target mission-critical SAP systems, with applications often remaining vulnerable for extended periods due to security patches not being applied in a timely manner. In February we saw the Cybersecurity and Infrastructure Security Agency (CISA) urge admins to patch SAP NetWeaver against a critical vulnerability that could facilitate a range of attacks and even lead to operational shutdown. In the very same month, of the 22 security notes or updates issued by SAP, eight were deemed “Hot News”. Four of these were updates; of the remaining four, three had a maximum CVSS score of 10 and the fourth 9.1. SAP is prolific in its patching. However, patches cannot be applied directly to productive systems, requiring downtime, which is often not an option for mission-critical systems. Even when a business upgrades to SAP S/4HANA, the pressure to go live can see security sidelined. ... Indeed, the earlier mentioned report reveals that exploits are attempted within 72 hours of SAP publicly announcing patches, while new SAP environments are being identified and attacked online within as little as three hours.



Quote for the day:

"I have a different vision of leadership. A leadership is someone who brings people together." -- George W. Bush

Daily Tech Digest - June 16, 2022

High-Bandwidth Memory (HBM) delivers impressive performance gains

In addition to widening the bus in order to boost bandwidth, HBM technology shrinks down the size of the memory chips and stacks them in an elegant new design form. HBM chips are tiny when compared to graphics double data rate (GDDR) memory, which it was originally designed to replace. 1GB of GDDR memory chips take up 672 square millimeters versus just 35 square millimeters for 1GB of HBM. Rather than spreading out the transistors, HBM is stacked up to 12 layers high and connected with an interconnect technology called ‘through silicon via’ (TSV). The TSV runs through the layers of HBM chips like an elevator runs through a building, greatly reducing the amount of time data bits need to travel. With the HBM sitting on the substrate right next to the CPU or GPU, less power is required to move data between CPU/GPU and memory. The CPU and HBM talk directly to each other, eliminating the need for DIMM sticks. “The whole idea that [we] had was instead of going very narrow and very fast, go very wide and very slow,” Macri said.


3 forces shaping the evolution of ERP

If there was any hesitation about moving to cloud-based ERP, it was quashed as the COVID crisis erupted, and corporate workplaces became scattered across countless home-based offices. On-premises ERP is seen as “not as scalable as people thought,” says Sharon Bhalaru, partner at accounting and technology consulting firm Armanino LLP. “We’re seeing a move to cloud-based systems,” to support remote employees who need to perform HR, financial and accounting tasks remotely. ... Next-generation ERP platforms “give companies real-time transparency with respect to sales, inventory, production, and financials,” the Boston Consulting Group analysts wrote. “Powerful data-driven analytics enables more agile decisions, such as adjustments to the supply chain to improve resilience. Robust e-commerce capabilities help companies better engage with online customers before and after a sale. And a lean ERP core and cloud-first approach increase deployment speed.” ... Unprecedented and ongoing supply chain disruptions underscore the need for greater visibility, more predictable lead times, alternative supply sources, and faster response to disruptions.


Interpol arrests thousands in global cyber fraud crackdown

The operation’s targets included telephone scammers, long-distance romance scammers, email fraudsters and other connected financial criminals, identified through a prior intelligence operation using Interpol’s secure global comms network, sharing data on suspects, suspicious bank accounts, unlawful transactions, and communications means such as phone numbers, email addresses, fake websites and IP addresses. “Telecom and BEC fraud are sources of serious concern for many countries and have a hugely damaging effect on economies, businesses and communities,” said Rory Corcoran. “The international nature of these crimes can only be addressed successfully by law enforcement working together beyond borders, which is why Interpol is critical to providing police the world over with a coordinated tactical response.” Duan Daqi added: “The transnational and digital nature of different types of telecom and social engineering fraud continues to present grave challenges for local police authorities, because perpetrators operate from a different country or even continent than their victims and keep updating their fraud schemes.”


Is Cyber Essentials Enough to Secure Your Organisation?

If you are to have confidence in your security controls, you must implement defence in depth. This requires a holistic approach to cyber security that addresses people, processes and technology. Key aspects of this aren’t addressed in Cyber Essentials, such as staff awareness training, vulnerability scanning and incident response. Employees are at the heart of any cyber security system, because they are the ones responsible for handling sensitive information. If they don’t understand their data protection requirements, it could result in disaster. Meanwhile, vulnerability scanning ensures that organisations can spot weaknesses in their systems before a cyber criminal can exploit them. It’s a more advanced form of protection than is offered with secure configuration and system updates, enabling organisations to proactively secure their systems. Conversely, incident response measures give organisations the tools they need to respond after a security incident has occurred. Most of the damage caused by a data breach occurs after the initial intrusion, so a prompt and organised response can be the difference between a minor disruption and a catastrophe.


Imagining a world without open standards

The open standard makes portability easier for software developers, provides integrators with choice in the building blocks for solutions, and enables customers to focus on solving business problems rather than integration issues. Open standards eliminate the need for organizations to expend energy wrangling with competitors on defining how systems should work, giving them the space and time to focus on building and improving how those systems actually do work. The real benefits, though, are downstream of vendors: open standards mean that businesses can effectively communicate and collaborate both internally and with peers. They mean that the expertise built up by a professional in one market or business can be taken with them wherever they want to work. They mean that a lack of knowledge resources is not the barrier that prevents businesses from making the move towards better, more efficient ways of working. In imagining a world without open standards, then, the image is one of businesses constantly having to navigate between the walled gardens of different technology vendors, reskilling and rehiring as they do so, before they can even begin the serious work of delivering value from that technology.


Good Habits That Every Programmer Should Have

We can become good at a specific technology by working with it for a long time. How can we become an expert in a specific technology? Learning internals is a great habit that helps us become an expert in any technology. For example, after working some time with Git, you can learn Git internals via the lesser-known plumbing commands. You can make accurate technical decisions when you understand the internals of your technology stack. When you learn internals, you will indeed become more familiar with the limitations and workarounds of a specific technology. Learning internals also helps us understand what we are doing in our day-to-day programming. Encourage everyone to learn more about their tools’ internals! ... Sometimes, we derive programming solutions from example code snippets that we can find on internet forums. It’s a good habit to give credit to other programmers’ hard work when we use their code snippets, libraries, and tools, even though their licensing documents say that attribution is not required.


Reducing Cybersecurity Risk From and to Third Parties

There are a number of ways in which organizations may be able to obtain attack information from third parties, if they agree. Ideally, such requirements should be included in service agreements and partnership contracts for vendors, outsourcers, and partners, as listed in the article, “Using Contracts to Reduce Cybersecurity Risks.” Employment contracts, nondisclosure agreements and license agreements may also include requirements that protect organizations against third-party risk. While it is helpful to request vendors, outsourcers and partners to commit to risk reduction in the contractual terms and conditions, it is even more beneficial for an organization to have direct access to partners’ and suppliers’ security monitoring systems. ... More modern forms of protection monitor messages for origin and content and respond with information about unauthorized sources—as with IDSs—or preventive action—as with IPSs. Advancements in these systems include observation of unusual behavior and the use of artificial intelligence (AI) to determine threats.


How Upskilling Could Resolve The Cybersecurity Skills Gap

With a shortage of new candidates, upskilling provides the answer to the cybersecurity skills gap. And it brings multiple benefits for both employees and businesses. One of the first is that, ultimately, cybersecurity is everyone’s business. From the CEO to the new employee at home, everyone has a role to play in ensuring systems are robust in the face of a growing wave of attacks. While this does not mean that everyone in a company needs to be a cybersecurity professional, it does mean that everyone should be aware of the risks, how to spot potential vulnerabilities and attacks and the practical measures they must take to prevent them. However, it can also produce a supply of cybersecurity professionals. Waiting for qualified entrants to the jobs market will take too long and, in practice, it’s likely they will not be qualified for long! The cybersecurity environment changes so rapidly, the knowledge many graduates gain at the start of their course may not be relevant by the end. Instead, identifying existing staff with the soft skills, or power skills, to develop, adapt, and learn may be the quickest and easiest path to take.


12 tips for achieving IT agility in the digital era

“If your tech stack is streamlined, easy to access, and easy to use, your workforce can quickly respond to business or customer needs seamlessly,” says Fleetcor’s duFour. Key to this is getting a handle on application sprawl by rationalizing the IT portfolio. Voya Financial’s simplification journey began with such an effort, a process that reduced its application footprint by 17% and its slate of technology tools by one quarter. The work continues as part of its cloud migration work. “This practice is instilling standards and discipline that will only help to ensure our environment remains uncluttered and contemporary for the long term,” Keshavan says. As a result, the IT group is faster and more flexible, recently deploying five new cloud services for data science and analytics developers to use within four hours — something that would have taken a cross-functional IT team several weeks to deploy in the past. Reining in application sprawl has also been valuable at Snow Software. “Oftentimes, companies and teams will invest in applications with similar purposes,” says Snow Software CIO Alastair Pooley.


True Component-Testing of the GUI With Karate Mock Server

There’s an important reason why old-style end-to-end tests are often more expensive than needed: you tend to test paths that are not relevant to the frontend logic. Each of these adds to the total test-suite run time. Consider a web application for your tax return. The user journey in this non-trivial app consists of submitting a series of questionnaires, their content customized depending on what you answered in previous steps. There is likely some logic on the frontend to manage the turns in that user journey, but the number-crunching over your sources of income and deductibles surely happens on the backend. You don’t need a GUI test to validate the correctness of those calculations. With a mock backend that would be entirely pointless. You set it up to tell the frontend that the final amount to pay is 12600 Euros. You can test that this amount is properly displayed, but its correctness cannot be tested here. All the decisions are made (and hopefully tested) elsewhere, so we can treat it as a hardcoded test fixture.
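Karate expresses mocks in its own feature-file DSL, which is not reproduced here; as a language-neutral sketch of the same idea, the following hypothetical Flask endpoint serves the hardcoded 12600-Euro fixture that the GUI test would assert against. The endpoint path and payload fields are made up for illustration.

```python
# Minimal mock backend sketch: the real number-crunching is replaced by a
# hardcoded fixture, so the GUI test only has to verify that the amount is
# rendered correctly, not that it was computed correctly.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/tax-return/summary")
def tax_summary():
    # Hardcoded test fixture: the frontend should display exactly this amount.
    return jsonify({"amountDue": 12600, "currency": "EUR"})

if __name__ == "__main__":
    app.run(port=8080)
```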



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - August 20, 2021

Identity security: a more assertive approach in the new digital world

Perimeter-based security, where organisations only allow trusted parties with the right privileges to enter and leave, doesn’t suit the modern digitalised, distributed environment of remote work and cloud applications. It’s just not possible to put a wall around a business that’s spread across multiple private and public clouds and on-premises locations. This has led to the emergence of approaches like Zero-Trust – an approach built on the idea that organisations should not automatically trust anyone or anything – and the growth of identity security as a discipline, which incorporates Zero-Trust principles at the scale and complexity required by modern digital business. Zero-Trust frameworks demand that anyone trying to access an organisation’s system is verified every time before granting access on a ‘least privilege’ basis, which is particularly useful in the context of the growing need to audit machine identities. Typically, they operate by collecting information about the user, endpoint, application, server, policies and all activities related to them and feeding it into a data pool which fuels machine learning (ML).


How Can We Make It Easier To Implant a Brain-Computer Interface?

As for implantable BCIs, so far there are only the Blackrock NeuroPort Array (Utah Array) implant, which also has the largest number of subjects implanted and the longest documented implantation times, and the Stentrode from Synchron, which has just recorded its first two implanted patients. The latter is essentially based on a stent that is inserted into the blood vessels in the brain and used to record EEG-type data (local field potentials (LFPs)). It is a very clever solution and surgical approach, and I do believe that it has great potential for a subset of use cases that do not require the high level of spatial and temporal resolution that our electrodes are offering. I am also looking forward to seeing the device’s long term performance. Our device records single unit action potentials (i.e., signals from individual neurons) and LFPs with high temporal and spatial resolution and high channel count, allowing significant spatial coverage of the neural tissue. It is implanted by a neurosurgeon who creates a small craniotomy (i.e., opens a small hole in the skull and dura) and inserts the device in the previously determined location by manually placing it in the correct area.


Artificial Intelligence (AI): 4 characteristics of successful teams

In most instances, AI pilot programs show promising results but then fail to scale. Accenture surveys point to 84 percent of C-suite executives acknowledging that scaling AI is important for future growth, but a whopping 76 percent also admit that they are struggling to do so. The only way to realize the full potential of AI is by scaling it across the enterprise. Unfortunately, some AI teams think only in terms of executing a workable prototype to establish proof-of-concept, or at best transform a department or function. Teams that think enterprise-scale at the design stage can go successfully from pilot to enterprise-scale production. They often build and work on ML-Ops platforms to standardize the ML lifecycle and build a factory line for data preparation, cataloguing, model management, AI assurance, and more. AI technologies demand huge compute and storage capacities, which often only large, sophisticated organizations can afford. Because resources are limited, AI access is privileged in most companies. This compromises performance because fewer minds mean fewer ideas, fewer identified problems, and fewer innovations.


Software Testing in the World of Next-Gen Technologies

If there is a technology that has gained momentum during the past decade, it is nothing other than artificial intelligence. AI offers the potential to mimic human tasks and improve operations through its own intellect, and the logic it brings to business shows scope for productive inferences. However, the benefit of AI can only be achieved by feeding computers with data sets, and this needs the right QA and testing practices. As long as test automation must be implemented to derive results, performance can only be achieved by using the right input data, which leads to effective processing. Moreover, the improvement of AI solutions is beneficial not only for other industries, but for QA itself, since many of the testing and quality assurance processes depend on automation technology powered by artificial intelligence. The introduction of artificial intelligence into the testing process has the potential to enable smarter testing. So, the testing of AI solutions could enable software technologies to work on better reasoning and problem-solving capabilities.


What Makes Agile Transformations Successful? Results From A Scientific Study

The ultimate test of any model is to test it with every Scrum team and every organization. Since this is not practically feasible, scientists use advanced statistical techniques to draw conclusions about the population from a smaller sample of data from that population. Two things are important here. The first is that the sample must be big enough to reliably distinguish effects from the noise that always exists in data. The second is that the sample must be representative enough of the larger population in order to generalize findings to it. It is easy to understand why. Suppose that you’re tasked with testing the purity of the water in a lake. You can’t feasibly check every drop of water for contaminants. But you can sample some of the water and test it. This sample has to be big enough to detect contaminants and small enough to remain feasible. It's also possible that contaminants are not equally distributed across the lake. So it's a good idea to sample and test a bucket of water at various spots from the lake. This is effectively what happens here.
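A small, self-contained illustration of the sample-size point: estimating how common a rare property is from samples of increasing size, and watching the error margin shrink as the sample grows. The 3% "contamination rate" and the normal-approximation confidence interval are invented for the example, not taken from the study.

```python
import math
import random

# Toy illustration: a simulated "lake" where 3% of drops are contaminated.
# Larger samples separate the true rate from sampling noise more reliably.
random.seed(7)
TRUE_RATE = 0.03

def sample_estimate(n):
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    p = hits / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)  # ~95% normal-approximation CI
    return p, margin

for n in (30, 300, 3000, 30000):
    p, margin = sample_estimate(n)
    print(f"n={n:>6}: estimate {p:.3f} +/- {margin:.3f}")
```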


OAuth 2.0 and OIDC Fundamentals for Authentication and Authorization

The main goal of OAuth 2.0 is delegated authorization. In other words, as we saw earlier, the primary purpose of OAuth 2.0 is to grant an app access to data owned by another app. OAuth 2.0 does not focus on authentication, and as such, any authentication implementation using OAuth 2.0 is non-standard. That’s where OpenID Connect (OIDC) comes in. OIDC adds a standards-based authentication layer on top of OAuth 2.0. The Authorization Server in the OAuth 2.0 flows now assumes the role of Identity Server (or OIDC Provider). The underlying protocol is almost identical to OAuth 2.0 except that the Identity Server delivers an Identity Token (ID Token) to the requesting app. The Identity Token is a standard way of encoding the claims about the authentication of the user. We will talk more about identity tokens later. ... For both these flows, the app/client must be registered with the Authorization Server. The registration process results in the generation of a client_id and a client_secret, which must then be configured on the app/client requesting authentication.
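As a rough sketch of how this looks in code, the following outlines an authorization-code flow with OIDC against a hypothetical identity server. The endpoints, client credentials, scopes and redirect URI are placeholders, and a real implementation must verify the ID Token's signature and claims (issuer, audience, expiry, nonce) rather than merely decoding them as shown here.

```python
# Minimal OIDC authorization-code flow sketch against a hypothetical
# identity server at https://id.example.com; not a specific vendor's API.
import base64, json, urllib.parse
import requests

CLIENT_ID = "my-app"
CLIENT_SECRET = "s3cr3t"          # issued at client registration
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: send the user's browser to the authorization endpoint.
auth_url = "https://id.example.com/authorize?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid profile email",   # "openid" asks for an ID Token (OIDC)
    "state": "af0ifjsldkj",
})

# Step 2: after the user authenticates, the code arrives at REDIRECT_URI
# and is exchanged for tokens at the token endpoint.
def exchange_code(code):
    resp = requests.post("https://id.example.com/token", data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    tokens = resp.json()              # access_token and, with OIDC, id_token
    # The ID Token is a JWT; here we only decode its payload to show the claims.
    payload = tokens["id_token"].split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return tokens["access_token"], claims
```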


How Biometric Solutions Are Shaping Workplace Security

Today, the corporate world and biometric technology go hand in hand. Companies cannot operate seamlessly without biometrics. Regular security checks just don’t cut it in companies anymore. Since biometric technologies are designed specifically to offer the highest level of security, there is limited to no room when it comes to defrauding these systems. Thus, technologies like ID Document Capture, Selfie Capture, 3D Face Map Creation, etc., are becoming the best way to secure the workplace. Biometric technology allows for specific data collection. It doesn’t just reduce the risk of a data breach but also protects important data in offices. Whether it’s cards, passwords, documents, etc., biometric technology eliminates the need for such hackable security implementations at the workplace. All biometric data like fingerprints, facial mapping, and so on are extremely difficult to replicate. Certain biological characteristics don’t change with time, and that prevents authentication errors. Hence, there’s limited scope for identity replication or mimicry. Customized personal identity access control has become an employee’s right of sorts. 


How to avoid being left behind in today’s fast-paced marketplace

The ability to speed up processes and respond more quickly to a highly dynamic market is the key to survival in today’s competitive business environment. For many large businesses, the ERP system forms a crucial part of the digital core, which is supplemented by best-of-breed applications in areas such as customer experience, supply chain, and asset management. When it comes to digitalisation, organisations will often focus on these applications and the connections between them. However, we often see businesses forget to automate processes in the digital core itself — an oversight that can negatively impact other digitalisation efforts. For example, the ability to analyse demand trends on social media in the customer-focused application can offer valuable insights, but if it takes months for the product data needed to launch a new product variant to be accessed, customer trends are likely to have already moved on. If we look more closely at the process of launching a new product to market, this is a prime example of where digital transformation can be applied to help manufacturers remain agile and respond to market trends more quickly. 


FireEye, CISA Warn of Critical IoT Device Vulnerability

Kalay is a network protocol that helps devices easily connect to a software application. In most cases, the protocol is implemented in IoT devices through a software development kit that's typically installed by original equipment manufacturers. That makes tracking devices that use the protocol difficult, the FireEye researchers note. The Kalay protocol is used in a variety of enterprise IoT and connected devices, including security cameras, but also dozens of consumer devices, such as "smart" baby monitors and DVRs, the FireEye report states. "Because the Kalay platform is intended to be used transparently and is bundled as part of the OEM manufacturing process, [FireEye] Mandiant was not able to create a complete list of affected devices and geographic regions," says Dillon Franke, one of the three FireEye researchers who conducted the research on the vulnerability. FireEye's Mandiant Red Team first uncovered the vulnerability in 2020. If exploited, the flaw can allow an attacker to remotely control a vulnerable device, "resulting in the ability to listen to live audio, watch real-time video data and compromise device credentials for further attacks based on exposed device functionality," the security firm reports.


An Introduction to Blockchain

The distributed ledger created using blockchain technology is unlike a traditional network, because it does not have the central authority common in a traditional network structure. Decision-making power usually resides with a central authority, who decides on all aspects of the environment. Access to the network and data is subject to the individual responsible for the environment. The traditional database structure is therefore controlled by power. This is not to say that a traditional network structure is not effective. Certain business functions may best be managed by a central authority. However, such a network structure is not without its challenges. Transactions take time to process and cost money; they are not validated by all parties due to limited network participation, and they are prone to error and vulnerable to hacking. To process transactions in a traditional network structure also requires technical skills. In contrast, the distributed ledger is controlled by rules, not by a central authority. The database is accessible to all the members of the network and installed on all the computers that use the database. Consensus between members is required to add transactions to the database.
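A minimal sketch of the hash-chaining idea behind such a rule-controlled ledger: each block commits to the previous block's hash, so any member holding a copy can verify that history has not been rewritten. The consensus mechanism that decides who appends the next block (proof of work, BFT voting, and so on) is deliberately omitted; this is an illustration, not a real blockchain.

```python
import hashlib, json, time

# Toy hash-chained ledger: tampering with any past block invalidates the chain.
def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]
    block = {"index": prev["index"] + 1, "time": time.time(),
             "transactions": transactions, "prev_hash": block_hash(prev)}
    chain.append(block)

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = [{"index": 0, "time": 0, "transactions": [], "prev_hash": ""}]
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True
chain[1]["transactions"][0]["amount"] = 500   # tamper with history...
print(verify(chain))                          # False: the chain no longer validates
```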



Quote for the day:

"Nothing is less productive than to make more efficient what should not be done at all." -- Peter Drucker

Daily Tech Digest - August 01, 2021

For tech firms, the risk of not preparing for leadership changes is huge

Tech execs should be more rigorous about succession planning for one important reason: institutional memory. Tech firms generally are younger than other companies of a similar size, which partly explains why the median age of S&P 500 companies plunged to 33 years in 2018 from 85 years in 2000, according to McKinsey & Co. These enterprises clearly have accomplished a lot in their short lives, but in their haste, most have not captured their history, unlike their longer-lived peers in other sectors. Less than half of these tech firms, in fact, have formally recorded their leader’s story for posterity. That puts them at a disadvantage when, inevitably, they will be required to onboard newcomers to their C-suites. It’s best to record this history well before the intense swirl of a leadership transition begins. Crucially, it will help the incoming and future generations of leadership understand critical aspects of its track record, the lessons learned, culture and identity. It also explains why the organization has evolved as it has, what binds people together and what may trigger resistance based on previous experience. It’s as much about moving forward as looking back.


The importance of having accountability in AI ethics

In recent years, the EU has made conscious steps towards addressing some of these issues, laying the groundwork for proper regulation for the technology. Its most recent proposals revealed plans to classify different AI applications depending on their risks. Restrictions are set to be introduced on uses of the technology that are identified as high-risk, with potential fines for violations. Fines could be up to 6pc of global turnover or €30m, depending on which is higher. But policing AI systems can be a complicated arena. Joanna J Bryson is professor of ethics and technology at the Hertie School of Governance in Berlin, whose research focuses on the impact of technology on human cooperation as well as AI and ICT governance. She is also a speaker at EmTech Europe 2021, which is currently taking place in Belfast as well as online. Bryson holds degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh and MIT. It was during her time at MIT in the 90s that she really started to pick up on the ethics around AI.


Data Platform: Data Ingestion Engine for Data Lake

When we design and build a Data Platform, we always need to evaluate whether automation provides enough value to compensate for the team's effort and time. Time is the only resource that we cannot scale. We can increase the team, but the relationship between people and productivity is not direct. Sometimes when a team is very focused on the automation paradigm, people want to automate everything, even actions that are only performed once or that do not provide real value. ... Usually, this is not an easy decision, and it has to be evaluated by the whole team. In the end, it is an ROI decision. I don't like this concept very much because it often focuses on economic costs and forgets about people and teams. Before starting any design and development, we have to analyze if there are tools available to cover our needs. As software engineers, we often want to develop our own software. But, from a team or product view, we should focus our efforts on the most valuable components and features. The goal of the Data Ingestion Engine is to make it easier to ingest data from the data sources into our Data Platform by providing a standard, resilient and automated ingestion layer.
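As a hypothetical sketch of what such a standard, automated ingestion layer could look like, the snippet below drives ingestion from a declarative source list and a small dispatcher, so onboarding a new source means adding a config entry rather than writing bespoke pipeline code. The connector names, config format and target paths are invented for illustration.

```python
import json

# Declarative source catalog: each entry describes a source and where its raw
# data lands in the data lake. Purely illustrative field names and values.
INGESTION_CONFIG = """
[
  {"name": "billing_db",  "type": "jdbc",  "schedule": "hourly",
   "target": "s3://datalake/raw/billing/",     "format": "parquet"},
  {"name": "clickstream", "type": "kafka", "schedule": "streaming",
   "target": "s3://datalake/raw/clickstream/", "format": "json"}
]
"""

def ingest_jdbc(source):
    print(f"[jdbc]  extracting {source['name']} -> {source['target']} ({source['format']})")

def ingest_kafka(source):
    print(f"[kafka] consuming {source['name']} -> {source['target']} ({source['format']})")

CONNECTORS = {"jdbc": ingest_jdbc, "kafka": ingest_kafka}

def run_ingestion(config_text):
    for source in json.loads(config_text):
        CONNECTORS[source["type"]](source)   # dispatch on the declared source type

run_ingestion(INGESTION_CONFIG)
```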


Beyond OAuth? GNAP for Next Generation Authentication

With GNAP, a client can ask for multiple access tokens in one grant request (vs. multiple requests). For instance, you could request read privileges on one resource and read and write privileges on another. ... In GNAP, the requesting client declares what kinds of interactions it supports. The authorization server responds to the request with an interaction to be used to communicate with the resource owner or the resource client. These interactions are defined in the GNAP spec as first-class objects, which provides extension points for future communication. Interactions may include redirecting the browser, opening a deeplink URL in a mobile application or providing a user code to be used elsewhere. ... GNAP provides a grant identifier if the authorization server determines a grant can be continued, unlike OAuth2. In the sample below, the grant identifier, access_token.value, can be presented to the authorization server if the grant needs to be modified or continued after the initial request.
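Based on the structure described above, a GNAP-style grant request asking for two access tokens in a single call and declaring the interaction modes the client supports might look roughly like the sketch below. The endpoint, labels, access types and client description are placeholders, and the GNAP specification remains the authoritative reference for the exact format.

```python
import requests

# Rough sketch of a GNAP-style grant request: two token requests in one grant,
# plus the interaction start modes this client can handle. Values are invented.
grant_request = {
    "access_token": [
        {"label": "inventory-read",
         "access": [{"type": "inventory-api", "actions": ["read"]}]},
        {"label": "orders-rw",
         "access": [{"type": "orders-api", "actions": ["read", "write"]}]},
    ],
    "client": {"display": {"name": "Warehouse Dashboard"}},
    "interact": {"start": ["redirect", "user_code"]},   # supported interactions
}

def request_grant():
    resp = requests.post("https://as.example.com/gnap", json=grant_request)
    body = resp.json()
    # If the AS determines the grant can be continued, the response carries a
    # continuation object (URI plus token) that can be presented later to
    # modify or continue the grant, as described above.
    return body
```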


The Future Of Work Will Demand These 8 New Skills

Closely related to entrepreneurship is resilience. Humans are nothing if not adaptable but embracing shifts and bouncing forward (rather than back) will require new competencies. The skill of resilience requires you to 1) stay aware of new information, 2) make sense of it, and 3) reinvent, innovate and solve problems. Finding fresh approaches and flexing based on your insights will be fundamental to success. ... Inherent to moving forward is the ability to believe in a positive future and focus on possibilities. When experts find fault with a lack of responsiveness, it’s often the result of a lack of imagination. The skills of being able to envision and foresee what might happen are critical to staying motivated, inspired and driven to create new beginnings. ... Success has always been about your network, but achievement in the future will depend even more on the strength of relationships. Your social capital and primary, secondary and tertiary relationships will form a critical safety net, offering you new learning, access to new opportunities and social support. The new skill will be the ability to build rapport—and to build it quickly and from a distance.


Will Artificial Intelligence Be the End of Web Design & Development

Whilst there has been plenty of hype in recent years around the impact AI will have on the website design and development community, the reality is that Artificial (Design) Intelligence technology is still very much in its infancy, and there’s a long way to go before we see web designers and developers being replaced by robots. AI-powered platforms and tools are actually making digital creatives and engineers more productive and more effective, allowing them to produce higher-quality digital experiences at a lower cost. The concept behind using Artificial Intelligence to create websites is quite simple: AI-powered code-completion tools are used to “make” a website on their own, and then machine learning is leveraged to optimize the user interface, entirely through adaptive intelligence, with minimal human intervention. ... The power of human creativity brings with it an innate curiosity; we are always looking to challenge the status quo and experiment with new forms and aesthetics. Creativity will always be a human endeavor.


Intelligent ERP: What It Takes To Thrive In A World Of Big Data

While challenging, this requirement led to an innovation that helped the payment services provider optimize its financial operations and better understand and expand its business. ZPS collaborated with the University of Seville in Spain to build a customized cash-flow model to uncover valuable liquidity and financial planning insights. Within this guarantee-monitoring model, ZPS uses Intelligent ERP to replicate data on contract accounts receivable in near-real time to a business warehousing solution and other reporting applications. An in-memory database then processes the data, calculates key figures such as customer cash-in and factoring cash-outs, and uses these figures to determine the amounts to be guaranteed each day. Furthermore, with a live connection to its business warehousing solution, ZPS uses a cloud-based analytics solution to let employees access calculated data and consume reports through intuitive dashboards and predictive stories. By amplifying the value of its Big Data with Intelligent ERP and augmented analytics, ZPS allows a larger circle of business users to gain insights into financial KPIs, such as gross customer cash-ins or days from order. 
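
ZPS’s actual cash-flow model is not published, so the following Python sketch only illustrates the general idea of turning replicated receivables figures (customer cash-in, factoring cash-out) into a daily amount to be guaranteed; the data structure, formula, and numbers are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class DailyPosition:
    # Purely illustrative figures derived from replicated receivables data.
    date: str
    customer_cash_in: float      # expected collections for the day
    factoring_cash_out: float    # receivables sold to the factor that day

def amount_to_guarantee(position: DailyPosition, coverage_ratio: float = 1.0) -> float:
    """Guarantee the uncovered gap between outflows and expected inflows."""
    gap = position.factoring_cash_out - position.customer_cash_in
    return max(gap, 0.0) * coverage_ratio

day = DailyPosition(date="2021-03-01", customer_cash_in=120_000.0, factoring_cash_out=150_000.0)
print(amount_to_guarantee(day))  # 30000.0 under these made-up numbers
```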


Is McKinsey wrong about the financial benefits of diversity?

The authors emphasize that this isn’t definitive proof that there is no connection between racial and ethnic diversity and profits—more research is needed on that front. They also note several other important caveats, including that S&P 500 companies are not a random sample of public US firms, and that their method of identifying race and ethnicity among executives (using faces and names) is likely to overestimate the number of white executives. But they criticize McKinsey’s methodology, including its metric for measuring diversity among executives. They conclude that “caution is warranted in relying on McKinsey’s findings to support the view that US publicly traded firms can deliver improved financial performance if they increase the racial/ethnic diversity of their executives.” Among the additional research that Green and Hand call for is a way to better examine whether there is any causal relationship between a firm’s diversity and its financial performance. McKinsey, by its own admission, is only looking at correlation. 


Data scientists continue to be the sexiest hires around

With the value of data science clear in the potential of these industries, there is no reason to believe data science will be anything but a growing profession for years to come. AI adoption alone has skyrocketed in recent years: half of all surveyed organizations now say they have applied AI to fulfill at least one function, with many more intending to invest in data-driven solutions. As data becomes more accessible and more powerful, so too grows the need for data scientists. Today, data scientists must help businesses navigate a world of global data collection and applications. From securing business processes to meeting international data security standards to connecting new and vital patterns in business trends, data scientists are vital to the success of innumerable businesses across industries. One such effort they can be part of is setting global data security standards for various industries. Data science is still one of the sexiest jobs you can have because it increasingly means helping people and saving money.


Stanford Researchers Put Deep Learning On A Data Diet

With the cost of training deep learning models on the rise, individual researchers and small organisations are settling for pre-trained models. Today, the likes of Google or Microsoft have budgets (read: millions of dollars) for training state-of-the-art language models. Meanwhile, efforts are underway to make the whole paradigm of training less daunting for everyone. Researchers are actively exploring ways to maximise training efficiency to make models run faster and use less memory. A common practice is to train small models until they converge and then apply a light compression technique. Techniques like parameter pruning have already become popular for reducing redundancies without sacrificing accuracy. In pruning, redundancies in the model parameters are identified, and the redundant, non-critical ones are removed. Identifying important training data also plays a role in online and active learning. But how much of the data is superfluous? ... For instance, the capabilities of computer vision systems have improved greatly due to (a) deeper models with high complexity, (b) increased computational power and (c) the availability of large-scale labeled data.
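
As a concrete (if simplified) illustration of parameter pruning, the following Python/NumPy sketch zeroes out the smallest-magnitude weights of a layer. It shows unstructured magnitude pruning in general, not the specific method used in the Stanford work.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the `sparsity` fraction of smallest-magnitude weights.

    A minimal illustration of unstructured magnitude pruning; redundant,
    low-magnitude parameters are treated as non-critical and removed.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]          # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)
print(f"non-zero before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```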



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - August 20, 2020

11 penetration testing tools the pros use

Formerly known as BackTrack Linux and maintained by the good folks at Offensive Security (OffSec, the same folks who run the OSCP certification), Kali is optimized in every way for offensive use as a penetration tester. While you can run Kali on its own hardware, it's far more common to see pentesters using Kali virtual machines on OS X or Windows. Kali ships with most of the tools mentioned here and is the default pentesting operating system for most use cases. Be warned, though: Kali is optimized for offense, not defense, and is easily exploited in turn. Don't keep your super-duper extra secret files in your Kali VM. ... Why exploit when you can meta-sploit? This appropriately named meta-software is like a crossbow: aim at your target, pick your exploit, select a payload, and fire. Indispensable for most pentesters, Metasploit automates vast amounts of previously tedious effort and is truly "the world's most used penetration testing framework," as its website trumpets. An open-source project with commercial support from Rapid7, Metasploit is a must-have for defenders to secure their systems from attackers.


The Role of Business Analysts in Agile

A few things that we as BA Managers need to be aware of include: Understanding of the role - because of a BA’s ability to be a flexible, helpful, and overall "fill-in-the-gaps" person, the role of the BA gets blurrier and blurrier. This is what makes it interesting and also so great when it comes to working within an agile team. Ultimately, it also makes it complicated to explain to others, especially those unfamiliar with the role. If it is complicated to explain, it is easy for people to underestimate the value it brings, so make sure you are clear in your "pitch" of what your BAs do! Being pigeonholed into the role - if you are a great BA, nobody wants to lose you, so they will continue giving you BA work even if you want to move into something else like project management. It is key for those managing BAs to actively support their career aspirations even if they are outside the discipline, and to lobby on their behalf. Hitting an analysis complexity "ceiling" - if you are constantly with your team helping them solve delivery problems, it is very hard to dedicate focused analysis time to upcoming large initiatives.


Cisco bug warning: Critical static password flaw in network appliances needs patching

The flaws reside in the Cisco Discovery Protocol, a Layer 2 or data link layer protocol in the Open Systems Interconnection (OSI) networking model. "An attacker could exploit these vulnerabilities by sending a malicious Cisco Discovery Protocol packet to the targeted IP camera," explains Cisco in the advisory for the flaws CVE-2020-3506 and CVE-2020-3507. "A successful exploit could allow the attacker to execute code on the affected IP camera or cause it to reload unexpectedly, resulting in a denial-of-service (DoS) condition." The Cisco cameras are vulnerable if they are running a firmware version earlier than 1.0.9-4 and have the Cisco Discovery Protocol enabled. Again, customers need to apply Cisco's update to protect the model because there's no workaround. This bug was reported to Cisco by Qian Chen of Qihoo 360 Nirvan Team. However, Cisco notes it is not aware of any malicious activity using this vulnerability.  The second high-severity advisory concerns a privilege-escalation flaw affecting the Cisco Smart Software Manager On-Prem or SSM On-Prem. It's tracked as CVE-2020-3443 and has a severity score of 8.8 out of 10.


Fuzzing Services Help Push Technology into DevOps Pipeline

"Fuzzing by its very nature is this idea of automated continuous testing," he says. "There is not a lot of human input that is necessary to gain the benefits of fuzz testing in your environment. It's a good fit from the idea of automation and continuous testing, along with this idea of continuous development." Many companies are aiming to create agile software development processes, such as DevOps. Because this change often takes many iterative cycles, advanced testing methods are not usually given high priority. Fuzz testing, the automated process of submitting randomized or crafted inputs into the application, is one of these more complex techniques. Even within the pantheon of security technologies, fuzzing is often among the last adopted. Yet, 2020 may be the year that changes. Major providers and even frameworks have focused on making fuzzing easier, says David Haynes, a product security engineer at Cloudflare. "I think we are just getting started in terms of seeing fuzzing becoming a bit more mainstream, because the biggest factor hindering (its adoption) was available tooling," he says. "People accept that integration testing is needed, unit testing is needed, end-to-end testing is needed, and now, that fuzz testing is needed."


Why We Need Lens as a Kubernetes IDE

The current version of Lens vastly improves quality of life for developers and operators managing multiple clusters. It installs on Linux, Mac or Windows desktops, and lets you switch from cluster to cluster with a single click, providing metrics, organizing and exposing the state of everything running in the cluster, and letting you edit and apply changes quickly and with assurance. Lens can hide all the ephemeral complexity of setting up cluster access. It lets you add clusters manually by browsing to their kubeconfigs, and can automatically discover kubeconfig files on your local machine. You can manage local or remote clusters of virtually any flavor, on any infrastructure or cloud. You can also organize clusters into workgroups any way you like and interact with these subsets. This capability is great for DevOps and SREs managing dozens or hundreds of clusters or just helping to manage cluster sprawl. Lens installs whatever version of kubectl is required to manage each cluster, eliminating the need to manage multiple versions directly. It works entirely within the constraints each cluster’s role-based access control (RBAC) imposes on identity, so Lens users (and teams of users) can see and interact only with permitted resources.
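
Lens itself is a desktop application, so this is not how Lens works internally; the following sketch just illustrates the same underlying idea (discovering kubeconfig contexts and switching between clusters) with the official Kubernetes Python client, and it assumes a local kubeconfig is present.

```python
# pip install kubernetes
from kubernetes import client, config

# Discover all contexts (clusters) available in the local kubeconfig.
contexts, active_context = config.list_kube_config_contexts()
print("Available contexts:")
for ctx in contexts:
    marker = "*" if ctx["name"] == active_context["name"] else " "
    print(f" {marker} {ctx['name']}")

# "Switch clusters" by building a client for a specific context, then list its
# namespaces, subject to whatever RBAC permissions the current identity has.
api = client.CoreV1Api(api_client=config.new_client_from_config(context=contexts[0]["name"]))
for ns in api.list_namespace().items:
    print(ns.metadata.name)
```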


Computer scientists create benchmarks to advance quantum computer performance

The computer scientists created a family of benchmark quantum circuits with known optimal depths or sizes. In computer design, the smaller the circuit depth, the faster a computation can be completed. Smaller circuits also imply more computation can be packed into the existing quantum computer. Quantum computer designers could use these benchmarks to improve design tools that could then find the best circuit design. “We believe in the ‘measure, then improve’ methodology,” said lead researcher Jason Cong, a Distinguished Chancellor’s Professor of Computer Science at UCLA Samueli School of Engineering. “Now that we have revealed the large optimality gap, we are on the way to develop better quantum compilation tools, and we hope the entire quantum research community will as well.” Cong and graduate student Daniel (Bochen) Tan tested their benchmarks with four of the most widely used quantum compilation tools. Tan and Cong have made the benchmarks, named QUEKO, open source and available on the software repository GitHub.
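
The QUEKO benchmarks themselves live on GitHub, so the snippet below is not them; it is only a small Qiskit sketch of the “measure, then improve” idea, comparing a circuit’s depth before and after compilation against a constrained device model.

```python
# pip install qiskit
from qiskit import QuantumCircuit, transpile

# A tiny circuit whose hand-built depth we know.
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
print("original depth:", qc.depth())

# Compile against a restricted basis and a line-shaped coupling map, as a real
# device would require, and see how much depth the compiler adds; that gap is
# the kind of quantity optimality benchmarks are meant to expose.
compiled = transpile(
    qc,
    basis_gates=["cx", "rz", "sx", "x"],
    coupling_map=[[0, 1], [1, 2]],
    optimization_level=3,
)
print("compiled depth:", compiled.depth())
```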


Starting strong when building your microservices team

We’re used to hearing the slogan ‘Go big or go home’, but businesses would do well to think small when developing microservices. Here, developing manageable and reusable components will enable companies, partners and customers to use individual microservices across an entire landscape of applications and industries. In doing so, businesses aren’t restricting themselves to siloed applications. In addition, driving success with microservices involves considerable planning to ensure that nothing is left out. After all, a microservices-based architecture consists of many moving parts, so developers should be mindful to guarantee that service interactions are seamless from start to finish. The pandemic has shone a spotlight on the role of digital transformation in building up crisis resilience. Consequently, businesses are turning en masse to digital and the market is evolving apace. However, as operational and business models shift, companies must be mindful to avoid becoming locked in to cloud vendor technologies and platforms in such a rapidly changing market. When working with a cloud partner, implementing their platform and other solutions shouldn’t be a given: while such tools will likely work fine in their own cloud environment, companies should be wary of how they will operate elsewhere.


From Legacy to Intelligent ERP: A Blueprint for Digital Transformation

Today’s ERP configuration is for running today’s business. Most run in the data center and capture, manage, and report on all core business transactions. Tomorrow’s intelligent ERP goes far beyond this charter. If you want to be part of the team transforming the business, then you should understand the vision of where the company is targeting growth over the next several years. What markets, products, and services are the priorities? What operations need to scale? What improvements in workflows can free up cash or make financial forecasting more reliable? How can you empower employees, teams, and departments to work efficiently, safely, and effectively as some people return to the office and others work remotely? Intelligent ERPs not only centralize operational workflows and data from sales, marketing, finance, and operations. These ERPs also extend data capture, workflow, and analytics around prospects and customers and their experiences interacting with the business. When fully implemented, they enable a full 360-degree view of the customer across all areas of the company that interface with them, from marketing to sales, through digital commerce, and on to any customer support activities.


Researchers improve perception of robots with new hearing capabilities

Working out of the Robotics Institute at Carnegie Mellon University, Pinto, as well as fellow researchers Dhiraj Gandhi and Abhinav Gupta, presented their findings during the virtual Robotics: Science and Systems conference last month. The three started the project last June, according to a release from the university. "We present three key contributions in this paper: (a) we create the largest sound-action-vision robotics dataset; (b) we demonstrate that we can perform fine grained object recognition using only sound; and (c) we show that sound is indicative of action, both for post-interaction prediction, and pre-interaction forward modeling," they write in the study. "In some domains like forward model learning, we show that sound in fact provides more information than can be obtained from visual information alone." In the published study, the three researchers said sounds did help a robot differentiate between objects and predict the physical properties of new objects. They also found that hearing helped robots determine what type of action caused a particular sound. Robots using sound capabilities were able to successfully classify objects 76% of the time, according to Pinto and the study.


Running Axon Server in Docker and Kubernetes

“Breaking down the monolith” is the new motto, as the message finally hits home that gluttony is also a sin in application land. If we want to be able to change in step with our market, we need to increase our deployment speed, and just tacking on small incremental changes has proven to be a losing game. No, we need to reduce interdependencies, which ultimately also means we need to accept that too much intelligence in the interconnection layer worsens the problem rather than solving it, as it sprinkles business logic all over the architecture and keeps creating new dependencies. Martin Fowler phrased it as “Smart endpoints and dumb pipes”, and as we do this, we increase application components’ autonomy and you’ll notice the individual pieces can finally start to shrink. Microservices architecture is a consequence of an increasing drive towards business agility, and woe to those who try to reverse that relationship. Imposing Netflix’s architecture on your organization to kick-start a drive for Agile development can easily destroy your business.



Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis