Showing posts with label neural networks. Show all posts
Showing posts with label neural networks. Show all posts

Daily Tech Digest - February 05, 2025


Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." --Philippos


Neural Networks – Intuitively and Exhaustively Explained

The process of thinking within the human brain is the result of communication between neurons. You might receive stimulus in the form of something you saw, then that information is propagated to neurons in the brain via electrochemical signals. The first neurons in the brain receive that stimulus, then each neuron may choose whether or not to "fire" based on how much stimulus it received. "Firing", in this case, is a neurons decision to send signals to the neurons it’s connected to. ... Neural networks are, essentially, a mathematically convenient and simplified version of neurons within the brain. A neural network is made up of elements called "perceptrons", which are directly inspired by neurons. ... In AI there are many popular activation functions, but the industry has largely converged on three popular ones: ReLU, Sigmoid, and Softmax, which are used in a variety of different applications. Out of all of them, ReLU is the most common due to its simplicity and ability to generalize to mimic almost any other function. ... One of the fundamental ideas of AI is that you can "train" a model. This is done by asking a neural network (which starts its life as a big pile of random data) to do some task. Then, you somehow update the model based on how the model’s output compares to a known good answer.


Why honeypots deserve a spot in your cybersecurity arsenal

In addition to providing critical threat intelligence for defenders, honeypots can often serve as helpful deception techniques to ensure attackers focus on decoys instead of valuable and critical organizational data and systems. Once malicious activity is identified, defenders can use the findings from the honeypots to look for indicators of compromise (IoC) in other areas of their systems and environments, potentially catching further malicious activity and minimizing the dwell time of attackers. In addition to threat intelligence and attack detection value, honeytokens often have the benefit of having minimal false positives, given they are highly customized decoy resources deployed with the intent of not being accessed. This contrasts with broader security tooling, which often suffers from high rates of false positives from low-fidelity alerts and findings that burden security teams and developers. ... Enterprises need to put some thought into the placement of the honeypots. It is common for them to be used in environments and systems that may be potentially easier for attackers to access, such as publicly exposed endpoints and systems that are internet accessible, as well as internal network environments and systems. The former, of course, is likely to get more interaction and provide broader generic insights. 


IoT Technology: Emerging Trends Impacting Industry And Consumers

An emerging IoT trend is the rise of emotion-aware devices that use sensors and artificial intelligence to detect human emotions through voice, facial expressions or physiological data. For businesses, this opens doors to hyper-personalized customer experiences in industries like retail and healthcare. For consumers, it means more empathetic tech—think stress-relieving smart homes or wearables that detect and respond to anxiety. ... The increasing prevalence of IoT tech means that it is being increasingly deployed into “less connected” environments. As a result, the user experience needs to be adapted so that it’s not wholly dependent on good connectivity—instead, priorities must include how to gracefully handle data gaps and robust fallbacks with missing control instructions. ... IoT systems can now learn user preferences, optimizing everything from home automation to healthcare. For businesses, this means deeper customer engagement and loyalty; for consumers, it translates to more intuitive, seamless interactions that enhance daily life. ... While not a newly emerging trend, the Industrial Internet of Things is an area of focus for manufacturers seeking greater efficiency, productivity and safety. Connecting machines and systems with a centralized work management platform gives manufacturers access to real-time data. 


When digital literacy fails, IT gets the blame

By insisting that requisite digital skills and system education are mastered before a system cutover occurs, the CIO assumes a leadership role in the educational portion of each digital project, even though IT itself may not be doing the training. Where IT should be inserting itself is in the area of system skills training and testing before the system goes live. The dual goals of a successful digital project should be two-fold: a system that’s complete and ready to use; and a workforce that’s skilled and ready to use it. ... IT business analysts, help desk personnel, IT trainers, and technical support personnel all have people-helping and support skills that can contribute to digital education efforts throughout the company. The more support that users have, the more confidence they will gain in new digital systems and business processes — and the more successful the company’s digital initiatives will be. ... Eventually, most of the technical glitches were resolved, and doctors, patients, and support medical personnel learned how to integrate virtual visits with regular physical visits and with the medical record system. By the time the pandemic hit in 2019, telehealth visits were already well under way. These visits worked because the IT was there, the pandemic created an emergency scenario, and, most importantly, doctors, patients, and medical support personnel were already trained on using these systems to best advantage.


What you need to know about developing AI agents

“The success of AI agents requires a foundational platform to handle data integration, effective process automation, and unstructured data management,” says Rich Waldron, co-founder and CEO of Tray.ai. “AI agents can be architected to align with strict data policies and security protocols, which makes them effective for IT teams to drive productivity gains while ensuring compliance.” ... One option for AI agent development comes directly as a service from platform vendors that use your data to enable agent analysis, then provide the APIs to perform transactions. A second option is from low-code or no-code, automation, and data fabric platforms that can offer general-purpose tools for agent development. “A mix of low-code and pro-code tools will be used to build agents, but low-code will dominate since business analysts will be empowered to build their own solutions,” says David Brooks, SVP of Evangelism at Copado. “This will benefit the business through rapid iteration of agents that address critical business needs. Pro coders will use AI agents to build services and integrations that provide agency.” ... Organizations looking to be early adopters in developing AI agents will likely need to review their data management platforms, development tools, and smarter devops processes to enable developing and deploying agents at scale.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because of the impression it removes user’s access to files and accounts that they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and causing frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. ... Consider a scenario where a company has a privileged Windows account with access to 100 servers. If PAM is instructed to discover the scope of this Windows account, it might only identify the servers that have been accessed previously by the account, without revealing the full extent of its access or the actions performed.


Quantum networking advances on Earth and in space

“The most established use case of quantum networking to date is quantum key distribution — QKD — a technology first commercialized around 2003,” says Monga. “Since then, substantial advancements have been achieved globally in the development and production deployment of QKD, which leverages secure quantum channels to exchange encryption keys, ensuring data transfer security over conventional networks.” Quantum key distribution networks are already up and running, and are being used by companies, he says, in the U.S., in Europe, and in China. “Many commercial companies and startups now offer QKD products, providing secure quantum channels for the exchange of encryption keys, which ensures the safe transfer of data over traditional networks,” he says. Companies offering QKD include Toshiba, ID Quantique, LuxQuanta, HEQA Security, Think Quantum, and others. One enterprise already using a quantum network to secure communications is JPMorgan Chase, which is connecting two data centers with a high-speed quantum network over fiber. It also has a third quantum node set up to test next-generation quantum technologies. Meanwhile, the need for secure quantum networks is higher than ever, as quantum computers get closer to prime time.


What are the Key Challenges in Mobile App Testing?

One of the major issues in mobile app testing is the sheer variety of devices in the market. With numerous models, each having different screen sizes, pixel densities, operating system (OS) versions and hardware specifications, ensuring the app is responsive across all devices becomes a task. Testing for compatibility on every device and OS can be tiresome and expensive. While tools like emulators and cloud-based testing platforms can help, it remains essential to conduct tests on real devices to ensure accurate results. ... In addition to device fragmentation, another key challenge is the wide range of OS versions. A device may run one version of an OS while another runs on a different version, leading to inconsistencies in app performance. Just like any other software, mobile apps need to function seamlessly across multiple OS versions, including Android, iPhone Operating System (iOS) and other platforms. Furthermore, OS are updated frequently, which can cause apps to break or not function. ... Mobile app users interact with apps under various network conditions, including Wi-Fi, 4G, 5G or limited connectivity. Testing how an app performs in different network conditions is crucial to ensure it does not hang or load slowly when the connection is weak. 


Reimagining KYC to Meet Regulatory Scrutiny

Implementing AI and ML allows KYC to run in the background rather than having staff manually review information as they can, said Jennifer Pitt, senior analyst for fraud and cybersecurity with Javelin Strategy & Research. “This allows the KYC team to shift to other business areas that require more human interaction like investigations,” Pitt said. Yet use of AI and ML remains low at many banks. Currently, fraudsters and cybercriminals are using generative adversarial networks - machine learning models that create new data that mirrors a training set - to make fraud less detectable. Fraud professionals should leverage generative adversarial networks to create large datasets that closely mirror actual fraudulent behavior. This process involves using a generator to create synthetic transaction data and a discriminator to distinguish between real and synthetic data. By training these models iteratively, the generator improves its ability to produce realistic fraudulent transactions, allowing fraud professionals to simulate emerging fraud types and account takeovers, and enhance detection models’ sensitivity to these evolving threats. Instead of waiting to gather sufficient historical data from known fraudulent behaviors, GANs enable a more proactive approach, helping fraud teams quickly understand new fraud trends and patterns, Pitt said.


How Agentic AI Will Transform Banking (and Banks)

Agentic AI has two intertwined vectors. For banks, one path is internal, and focused on operational efficiency for tasks including the automation of routine data entry and compliance and regulatory checks, summaries of email and reports, and the construction of predictive models for trading and risk management to bolster insights into market dynamics, fraud and credit and liquidity risk. The other path is consumer facing, and revolves around managing customer relationships, from automated help desks staffed by chatbots to personalized investment portfolio recommendations. Both trajectories aim to improve efficiency and reduce costs. Agentic AI "could have a bigger impact on the economy and finance than the internet era," Citigroup wrote in a January 2025 report that calls the technology the "Do It For Me" Economy. ... Meanwhile, automated AI decisions could inadvertently violate laws and regulations on consumer protection, anti-money laundering or fair lending laws. Agentic AI that can instruct an agent to make a trade based on bad data or assumptions could lead to financial losses and create systemic risk within the banking system. "Human oversight is still needed to oversee inputs and review the decisioning process," Davis says. 

Daily Tech Digest - March 03, 2024

The most popular neural network styles and how they work

Feedforward networks are perhaps the most archetypal neural net. They offer a much higher degree of flexibility than perceptrons but still are fairly simple. The biggest difference in a feedforward network is that it uses more sophisticated activation functions, which usually incorporate more than one layer. The activation function in a feedforward is not just 0/1, or on/off: the nodes output a dynamic variable. ... Recurrent neural networks, or RNNs, are a style of neural network that involve data moving backward among layers. This style of neural network is also known as a cyclical graph. The backward movement opens up a variety of more sophisticated learning techniques, and also makes RNNs more complex than some other neural nets. We can say that RNNs incorporate some form of feedback. ... Convolutional neural networks, or CNNs, are designed for processing grids of data. In particular, that means images. They are used as a component in the learning and loss phase of generative AI models like stable diffusion, and for many image classification tasks. CNNs use matrix filters that act like a window moving across the two-dimensional source data, extracting information in their view and relating them together. 


The startup CIO’s guide to formalizing IT for liquidity events

“You have to stop fixing problems in the data layer, relying on data scientists to cobble together the numbers you need. And if continuing that approach is advocated by the executives you work with, if it’s considered ‘good enough,’ quit,” he says. “Getting the numbers right at the source requires that you straighten out not only the systems that hold the data, all those pipelines of information, but also the processes whereby that data is captured and managed. No tool will ever entirely erase the friction of getting people to enter their data in a CRM.” The second piece to getting the numbers right comes at the end: closing the books. While this process is a near ubiquitous struggle for all growing companies, Hoyt offers two points of optimism. “First,” he explains, “many teams struggle to close the books simply because the company hasn’t invested in the proper tools. They’ve kicked the can down the street. And second, you have a clear metric of improvement: the number of days taken to close.” Hoyt suggests investing in the proper tools and then trying to shave the days-to-close each quarter. Get your numbers right, secure your company, bring it into compliance, and iron out your ops and infrastructure. 


Majority of commercial codebases contain high-risk open-source code

Advocates of open-source software have long argued that many eyes on code lead to fewer bugs and vulnerabilities, and the report doesn’t disprove that assertion, McGuire said. “If anything, the report supports that belief,” he said. “The fact that there are so many disclosed vulnerabilities and CVEs serves as a testament to how active, vigilant, and reactive the open-source community is, especially when it comes to addressing security issues. It’s this very community that is doing the discovery, disclosure, and patching work.” However, users of open-source software aren’t doing a good job of managing it or implementing the fixes and workarounds provided by the open-source community, he said. The primary purpose of the report is to raise awareness about these issues and to help users of open-source software better mitigate the risks, he said. “We would never recommend any software producer avoid using, or tamp down their usage, of open source,” he added. “In fact, we would argue the opposite, as the benefits of open source far outweigh the risks.” Open-source software has accelerated digital transformation and allowed companies to develop innovative applications that consumers want, he said. 


From gatekeeper to guardian: Why CISOs must embrace their inner business superhero

You, the CISO, are no longer just the security guard at the front gate. You're the city planner, the risk management consultant, the chief resilience officer, and the chief of police all rolled into one. You need to understand the flow of traffic, the critical infrastructure, and the potential vulnerabilities lurking in every alleyway. But how do we, the guardians of the digital realm, transform into these business superheroes? Fear not, fellow CISOs, for the path to upskilling and growth is paved with strategic learning, effective communication, and more than a dash of inspirational or motivational leadership. ... As the lone wolf days have ended, so too have the days when technical expertise alone could guarantee a CISO’s success. Today's CISO needs to be a voracious learner, constantly expanding their knowledge and skills. ... Failure to effectively communicate is a career killer for any CXO. To be influential, especially with the C-suite, CISOs must learn to speak in ways understood by their C-suite peers. Imagine how your eyes may glaze over when a CFO starts talking capex, opex, or EBITDA. Realize the same will happen for these cybersecurity “outsiders.”


Looking good, feeling safe – data center security by design

For data centers in shared spaces, sometimes turning data halls into display features is a way to make them secure. Keeping compute in a secure but openly visible space means it’s harder to do anything unnoticed. It may also help some engineers be more mindful about keeping the halls tidy and cabling neat. “Some people keep data centers behind closed walls and keep them hidden and private. Others use them as features,” says Nick Ewing, managing director at UK modular data center provider EfficiencyIT. “The best ones are the ones where the customers like to make a feature of the environment and use it to use it as a bit of a display.” An example he cites is the Wellcome Sanger Institute in Cambridge, where they have four data center quadrants. Each quadrant is about 100 racks; they have man traps at either end of the data center corridor. But one end of the main quadrant is full of glass. “They have an LED display, which is talking about how many cores of compute, how much storage they’ve got, how many genomic sequences they've they've sequenced that day,” he says. “They've used it as a feature and used it to their advantage.”


Neuromorphic computing: The future of IoT

The adoption of neuromorphic computing in IoT promises many benefits, ranging from enhanced processing power and energy efficiency to increased reliability and adaptability. Here are some key advantages: More Powerful AI: Neuromorphic chips enable IoT devices to handle complex tasks with unprecedented speed and efficiency. By collocating memory and processing and leveraging parallel processing capabilities, these chips overcome the limitations of traditional architectures, resulting in near-real-time decision-making and enhanced cognitive abilities. Lower Power Consumption: One of the most significant advantages of neuromorphic computing is its energy efficiency. By adopting an event-driven approach and utilizing components like memristors, neuromorphic systems minimize energy consumption while maximizing performance, making them ideal for power-constrained IoT environments. Extensive Edge Networks: With the proliferation of edge computing, there is a growing need for IoT devices that can process data locally in real-time. Neuromorphic computing addresses this need by providing the processing power and adaptability required to run advanced applications at the edge, reducing reliance on centralized servers and improving overall system responsiveness.


Decentralizing the AR Cloud: Blockchain's Role in Safeguarding User Privacy

For devices to interpret the world, their camera needs access to have some kind of digital counterpart that it can cross reference. And that digital counterpart of the world is much too complex to fit inside one device. Therefore, the AR cloud has been developed. The AR cloud is a network of computers that work to help devices understand the physical world. ... The AR cloud is akin to an API to the world. The implications for applications that require knowledge about location, context, and more are considerable. In AR, the data is intimate data about where we are, who we are with, what we’re saying, looking at, and even what our living quarters look like. AR devices can read our facial expressions, and more, similar to how the Apple Watch can measure the heart rates of its wearers. Digital service providers will have access to a bevy of information and also insight into our thinking, wants, needs, and desires. Storing that data in a centralized server that is opaque is cause for concern. Blockchain allows people to take that same intimate private data, and put it on their own server from which they could access the wondrous world of AR minus such egregious privacy concerns. 


Five ways AI is helping to reduce supply chain attacks on DevOps teams

Attackers are using AI to penetrate an endpoint to steal as many forms of privileged access credentials as they can find, then use those credentials to attack other endpoints and move throughout a network. Closing the gaps between identities and endpoints is a great use case for AI. A parallel development is also gaining momentum across the leading extended detection and response (XDR) providers. CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year, “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.” Leading XDR platform providers include Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro and VMWare. Enhancing LLMs with telemetry and human-annotated data defines the future of endpoint security.


Blockchain transparency is a bug

Transparency isn’t a feature of decentralization that is truly needed to perform on-chain transactions securely — it’s a bug that forces Web3 users to expose their most sensitive financial data to anyone who wants to see it. Several blockchain marketing tools have emerged over the past few years, allowing marketers and salespeople to use the freely flowing on-chain data for user insights and targeted advertising. But this time, it’s not just behavioral data that is analyzed. Now, your most sensitive financial information is also added to the mix. Web3 will never become mainstream unless we manage to solve this transparency problem. Blockchain and Web3 were an escape from centralized power, making information transparent so that centralized entities cannot own one’s data. Then 2020 came, Web3 and NFTs boomed, and many started talking about how free flowing, available-to-all data is a clear improvement from your data being “stolen” by big data companies as a customer. Some may think if everyone can see the data, transparency will empower users to take ownership of and profit from their own data. Yet, transparency does not mean data can’t be appropriated nor that users are really in control.


Key Considerations to Effectively Secure Your CI/CD Pipeline

Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRamp), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement. In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If you set up security scans or controls that users can bypass, those methods become totally useless — you want them to become mandatory.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - August 30, 2023

Generative AI Faces an Existential IP Reckoning of Its Own Making

Clearly, this situation is untenable, with a raft of dire consequences already beginning to emerge. Should the courts determine that generative AI firms aren’t protected by the fair use doctrine, the still-budding industry could be on the hook for practically limitless damages. Meanwhile, platforms like Reddit are beginning to aggressively push back against unchecked data scraping. ... These sorts of unintended externalities will only continue to multiply unless strong measures are taken to protect copyright holders. Government can play an important role here by introducing new legislation to bring IP laws into the 21st century, replacing outdated regulatory frameworks created decades before anyone could have predicted the rise of generative AI. Government can also spur the creation of a centralized licensing body to work with national and international rights organizations to ensure that artists, content creators, and publishers are being fairly compensated for the use of their content by generative AI companies.


6 hidden dangers of low code

The low-code sales pitch is that computers and automation make humans smarter by providing a computational lever that multiplies our intelligence. Perhaps. But you might also notice that, as people grow to trust in machines, we sometimes stop thinking for ourselves. If the algorithm says it’s the right thing to do, we'll just go along with it. There are endless examples of the disaster that can ensue from such thoughtlessness. ... When humans write code, we naturally do the least amount of work required, which is surprisingly efficient. We're not cutting corners; we're just not implementing unnecessary features. Low code solutions don’t have that advantage. They are designed to be one-size-fits-all, which in computer code means libraries filled with endless if-then-else statements testing for every contingency in the network. Low code is naturally less efficient because it’s always testing and retesting itself. This ability to adjust automatically is the magic that the sales team is selling, after all. But it’s also going to be that much less efficient than hand-tuned code written by someone who knows the business.


Applying Reliability Engineering to the Manufacturing IT Environment

To understand exposure to failure, the Reliability Engineers analyzed common failure modes across manufacturing operations, utilizing the Failure Mode and Effects Analysis (FMEA) methodology to anticipate potential issues and failures. Examples of common failure modes include “database purger/archiving failures leading to performance impact” and “inadequate margin to tolerate typical hardware outages.” The Reliability Engineers also identified systems that were most likely to cause factory impact due to risk from these shared failure modes. This data helped inform a Resiliency Maturity Model (RMM), which scores each common failure mode on a scale from 1 to 5 based on a system’s resilience to that failure mode. This structured approach enabled us to not just fix isolated examples of applications that were causing the most problems, but to instead broaden our impact and develop a reliability mindset. 


5 Skills All Marketing Analytics and Data Science Pros Need Today

Marketing analysts should hone their skills to know who to talk to – and how to talk to them – to secure the information they have. Trust Insights’ Katie Robbert says it requires listening and asking questions to understand what they know that you need to take back to your team, audience, and stakeholders. “You can teach anyone technical skills. People can follow the standard operating procedure,” she says. “The skill set that is so hard to teach is communication and listening.” ... By improving your communication skills, you’ll be well-positioned to follow Hou’s advice: “Weave a clear story in terms of how marketing data could and should guide the organization’s marketing team.” She says you should tell a narrative that connects the dots, explains the how and where of a return on investment, and details actions possible not yet realized due to limited lines of sight. ... Securing organization-wide support requires leaning into what the data can do for the business. “Businesspeople want to see the business outcomes. 


Neural Networks vs. Deep Learning

Neural networks, while powerful in synthesizing AI algorithms, typically require less resources. In contrast, as deep learning platforms take time to get trained on complex data sets to be able to analyze them and provide rapid results, they typically take far longer to develop, set up and get to the point where they yield accurate results. ... Neural networks are trained on data as a way of learning and improving their conclusions over time. As with all AI deployments, the more data it’s trained on the better. Neural networks must be fine-tuned for accuracy over and over as part of the learning process to transform them into powerful artificial intelligence tools. Fortunately for many businesses, plenty of neural networks have been trained for years – far before the current craze inspired by ChatGPT – and are now powerful business tools. ... Deep learning systems make use of complex machine learning techniques and can be considered a subset of machine learning. But in keeping with the multi-layered architecture of deep learning, these machine learning instances can be of various types and various strategies throughout a single deep learning application.


Ready or not, IoT is transforming your world

At its core, IoT refers to the interconnection of everyday objects, devices, and systems through the internet, enabling them to collect, exchange, and analyze data. This connectivity empowers us to monitor and control various aspects of our lives remotely, from smart homes and wearable devices to industrial machinery and city infrastructure. The essence of IoT lies in the seamless communication between objects, humans, and applications, making our environments smarter, more efficient, and ultimately, more convenient. ... Looking ahead, the future of IoT holds remarkable potential. Over the next five years, we can expect a multitude of advancements that will reshape industries and lifestyles. Smart cities will continue to evolve, leveraging IoT to enhance sustainability, security, and quality of life. The healthcare sector will witness even more personalized and remote patient monitoring, revolutionizing the way medical care is delivered. AI and automation will play a pivotal role, in driving efficiency and innovation across various domains.


What are network assurance tools and why are they important?

Without a network assurance tool at their disposal, many enterprises would be forced to limit their network reach and capacity. "They would be unable to take advantage of the latest technological advancements and innovations because they didn’t have the manpower or tools to manage them," says Christian Gilby, senior product director, AI-driven enterprise, at Juniper Networks. "At the same time, enterprises would be left behind by their competitors because they would still be utilizing manual, trial-and-error procedures to uncover and repair service issues." The popularity of network assurance technology is also being driven by a growing enterprise demand for network teams to do more with less. "Efficiency is needed in order to manage the ever-expanding network landscape," adds Gilby. New devices and equipment are constantly brought online and added to networks. Yet enterprises don’t have unlimited IT budgets, meaning that staffing levels often remain the same, even as workloads increase.


How tomorrow’s ‘smart cities’ will think for themselves

In the smart cities of the future, technology will be built to respond to human needs. Sustainability is the biggest problem facing cities – and by far the biggest contributor is the automobile. Smart cities will enable the move towards reducing traffic, and towards autonomous vehicles directed efficiently through the streets. Deliveries which are not successful the first time are one example. These are a key driver of congestion, as drivers have to return to the same address repeatedly. In a cognitive city, location data that shows when a customer is home can be shared anonymously with delivery companies – with their consent – so that more deliveries arrive on the first attempt. Smart parking will be another important way to reduce congestion and make the streets more efficient. Edge computing nodes will sense empty parking spaces and direct cars there in real-time. They will also be a key enabler for autonomous driving, delivering more data points to autonomous systems in cars. 


Navigating Your Path to a Career in Cyber Security: Practical Steps and Insights

Practical experience is critical in the field of cyber security. Seek opportunities to apply your knowledge and gain hands-on experience as often as you can. I recommend looking for internships, part-time jobs, or volunteer positions that allow you to work on real-world projects and develop practical skills. I cannot stress how important it is to understand the fundamentals. ... Networking is essential for finding job opportunities in any field, including cybersecurity. You should attend industry events and conferences (there are plenty of free ones) and try to meet as many professionals already working in the field as possible. Their insights will go a long way in your journey to finding the right role. There are also many online communities and forums you can join where cyber security experts gather to discuss trends, share knowledge, and explore job opportunities. Networking will help you gain insights, discover job openings, and even receive recommendations from industry professionals.


NCSC warns over possible AI prompt injection attacks

Complex as this may seem, some early developers of LLM-products have already seen attempted prompt injection attacks against their applications, albeit generally these have been either rather silly or basically harmless. Research is continuing into prompt injection attacks, said the NCSC, but there are now concerns that the problem may be something that is simply inherent to LLMs. This said, some researchers are working on potential mitigations, and there are some things that can be done to make prompt injection a tougher proposition. Probably one of the most important steps developers can take is to ensure they are architecting the system and its data flows so that they are happy with the worst-case scenario of what the LLM-powered app is allowed to do. “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” wrote the NCSC team.



Quote for the day:

"When you practice leadership, the evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - January 21, 2023

Is Your Innovation Project Condemned To Succeed?

The challenge in most organizations is that leaders are looking to make big bets on a few projects. These bets are typically based on asking innovation teams to create a business case before they receive investment. A business case showing good returns will receive investment with the expectation that it will succeed. The team is given no room for failure. ... This problem is exacerbated if your team has received a large investment to work on the project. Most innovation teams lose the discipline to test their ideas if they have large budgets to spend. In most cases they burn through the money while executing on their original idea. By the time they learn that the idea may not work, they have already spent millions of dollars. At this point, admitting failure is career suicide. ... Imagine being the CEO’s pet project, having a large investment and then being publicly celebrated as a lighthouse project before you have made any money for the company. This public celebration of a single innovation project puts a lot of pressure on innovation teams to succeed. 


Which cloud workloads are right for repatriation?

Look at the monthly costs and values of each platform. This is the primary reason we either stay put on the cloud or move back to the enterprise data center. Typically the workload has already been on the cloud for some time, so we have a good understanding of the costs, talent needed, and other less-quantifiable benefits of cloud, such as agility and scalability. You would think that these are relatively easy calculations to make, but it becomes complex quickly. Some benefits are often overlooked and architects make mistakes that cost the business millions. All costs and benefits of being on premises should be considered, including the cost of the humans needed to maintain the platforms (actual hardware and software), data center space (own or rent), depreciation, insurance, power, physical security, compliance, backup and recovery, water, and dozens of other items that may be specific to your enterprise. Also consider the true value of agility and scalability that will likely be lost or reduced if the workloads return to your own data center.


Network automation: What architects need to know

It's great to strive for an automation-first culture and find innovative ways to use technology as a competitive advantage, but I recommend first targeting low-risk, high-reward tasks. Try to create reusable building blocks to operate more efficiently. One example is automating the collection and parsing of operational data from the network, such as routing protocol sessions state, VPN service status, or other relevant metrics to produce actionable or consumable outputs. Gathering this information is a read-only activity, so the risk is low. The reward is high because this task is a time-consuming, repetitive process. Also, you can use this data for various purposes, such as creating reports, running audits, filling in trouble tickets, performing pre-and post-checks during maintenance windows, and so on. You don't need to wait until you get everything right to start. Improve on your automation solution iteratively. Small initial steps can make a big difference in your network. For example, for the data collection example above, you don't need the full list of key performance indicators (KPIs) on day 1; your users will let you know what you're missing over time.


Finding Adequate Metrics for Outer, Inner, and Process Quality in Software Development

Quite an obvious criteria for outer quality is the question of if the users like the product. If your product has customer support, you could simply count the number of complaints or contacts. Additionally, you can categorize these to gain more information. While this is in fact a lot of effort and far from trivial, it is a very direct measure and might yield a lot of valuable information on top. One problem here is selection bias. We are only counting those who are getting in contact, ignoring those who are not annoyed enough to bother (yet). Another similar problem is survivorship bias. We ignore those users who simply quit due to an error and never bother to get in contact. Both biases may lead us to over-focus on issues of a complaining minority, while we should rather further improve things users actually like about the product. Besides these issues, the complaint rate can also be gamed: simply make it really hard to contact customer support by hiding contact information or increase waiting time in the queue.


Platform Engineering Won’t Kill the DevOps Star

“The movement to ‘shift left’ has forced developers to have an end-to-end understanding of an ever-increasing amount of complex tools and workflows. Oftentimes, these tools are infrastructure-centric, meaning that developers have to be concerned with the platform and tooling their workloads run on,” Humanitec’s Luca Galante writes in his platform engineering trends in 2023, which demands more infrastructure abstraction. Indeed, platform engineering could be another name for cloud engineering, since so much of developers’ success relies on someone obscuring the complexity out of the cloud — and so many challenges are found in that often seven layers thick stack. Therefore you could say platform engineering takes the spirit of agile and DevOps and extends it within the context of a cloud native world. She pointed to platform engineering’s origins in Team Topologies, where “the platform is designed to enable the other teams. The key thing about it is kind of this self-service model where app teams get what they want from the platform to deliver business value,” Kennedy said.


The Concept of Knowledge Graph, Present Uses and Potential Future Applications

A knowledge graph is a database that uses a graph structure to represent and store knowledge. It is a way to express and organize data that is easy for computers to understand and reason about and which can be used to perform tasks such as answering questions or making recommendations. The graph structure consists of nodes, which represent entities or concepts, and edges, which represent relationships between the nodes. For example, a node representing the concept "Apple" might have advantages over nodes representing the concepts "Fruit," "Cupertino, California," and "Tim Cook," which represent relationships such as "is a type of," "is located in," and "has a CEO of," respectively. In a knowledge graph, the relationships between nodes are often explicitly defined and stored, which allows computers to reason about the data and make inferences based on it. This is in contrast to traditional databases, which store data in tables and do not have direct relationships between the data points.


4 tips to broaden and diversify your tech talent pool

Apprenticeships are extremely valuable for both employers and candidates. For employers, apprenticeships are a cost-effective way to groom talent, providing real-world training and a skilled employee at the end of the program. Apprenticeship programs also reduce the ever-present risk of hiring a full-time entry-level employee, who may prove to not be up to the required standard or decide for themselves that the organization or industry is not a fit. For workers, an apprenticeship is essentially a crash course providing the opportunity to earn while they learn. With the average college graduate taking on $30,000 in debt (and many taking on much more), a degree has increasingly become out of financial reach for many Americans. Apprenticeships are an excellent way for people to gain tangible work experience and applicable skills while also providing a trial run to determine whether a career in cybersecurity is right for them. For me, apprenticeship programs are a true win-win. During National Apprenticeship Week this year, we joined the Department of Labor’s event at the White House to celebrate the culmination of the 120-day Cybersecurity Apprenticeship Sprint. 


Debugging Threads and Asynchronous Code

Let’s discuss deadlocks. Here we have two threads each is waiting on a monitor held by the other thread. This is a trivial deadlock but debugging is trivial even for more complex cases. Notice the bottom two threads have a MONITOR status. This means they’re waiting on a lock and can’t continue until it’s released. Typically, you’d see this in Java as a thread is waiting on a synchronized block. You can expand these threads and see what’s going on and which monitor is held by each thread. If you’re able to reproduce a deadlock or a race in the debugger, they are both simple to fix. Stack traces are amazing in synchronous code, but what do we do when we have asynchronous callbacks? Here we have a standard async example from JetBrains that uses a list of tasks and just sends them to the executor to perform on a separate thread. Each task sleeps and prints a random number. Nothing to write home about. As far as demos go this is pretty trivial. Here’s where things get interesting. As you can see, there’s a line that separates the async stack from the current stack on the top. 


3 requirements for developing an effective cloud governance strategy

Governance is not a one-size-fits-all proposition, and each organization may prefer a different approach to governance depending on its objectives. Digital transformation is no longer a novel concept. But continuous innovation is required to improve and remain competitive, making automation critical for operational efficiency. According to IDC's Worldwide Artificial Intelligence and Automation 2023 Predictions, AI-driven features are expected to be embedded across business technology categories by 2026, with 60% of organizations actively utilizing such features to drive better outcomes. Automation is critical for increasing efficiency in cloud management operations, such as billing and cost transparency, right-sizing computer resources, and monitoring cost anomalies. The use of automated tools can improve security, lower administrative overhead, decrease rework, and lower operational costs. Definable metrics and key performance indicators (KPIs) can be used to assess outcomes with the right cost transparency tool. ... Automation can also aid in resolving personnel issues, which can cause migration projects to stall.


Styles of machine learning: Intro to neural networks

What makes the neural network powerful is its capacity to learn based on input. This happens by using a training data set with known results, comparing the predictions against it, then using that comparison to adjust the weights and biases in the neurons. ... A common approach is gradient descent, wherein each weight in the network is isolated via partial derivation. For example, according to a given weight, the equation is expanded via the chain rule and fine-tunings are made to each weight to move overall network loss lower. Each neuron and its weights are considered as a portion of the equation, stepping from the last neuron(s) backwards. You can think of gradient descent this way: the error function is the graph of the network's output, which we are trying to adjust so its overall shape (slope) lands as well as possible according to the data points. In doing gradient backpropagation, you stand at each neuron’s function and modify it slightly to move the whole graph a bit closer to the ideal solution. The idea here is that you consider the entire neural network and its loss function as a multivariate equation depending on the weights and biases.



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin

Daily Tech Digest - August 09, 2022

Deepfakes Grow in Sophistication, Cyberattacks Rise Following Ukraine War

The use of deepfakes to evade security controls and compromise organizations is on the rise among cybercriminals, with researchers seeing a 13% increase in the use of deepfakes compared with last year. That's according to VMware's eighth annual "Global Incident Response Threat Report," which says that email is usually the top delivery method. The study, which surveyed 125 cybersecurity and incident response (IR) professionals from around the world, also reveals an uptick in overall cybersecurity attacks since Russia's invasion of Ukraine; extortionary ransomware attacks including double extortion techniques, data auctions, and blackmail; and attacks on APIs. "Attackers view IT as the golden ticket into an organization's network, but unfortunately, it is just the start of their campaign," explains Rick McElroy, principal cybersecurity strategist at VMware. "The SolarWinds attack gave threat actors looking to target vendors a step-by-step manual of how to successfully pull off an attack." He says that keeping this in mind, IT and security teams need to work hand in hand to ensure all access points are secure to prevent an attack like that from harming their own organization.


How CFOs and CISOs Can Build Strong Partnerships

“There is no substitute for regular communication,” he said. “In addition to the formal, structured channels, I have found it most helpful to just talk to Lena and her team about key initiatives, any issues concerning them, and overall trends in security and the business more broadly.” If possible, conversations between the CISO and chief financial officer should also include the chief privacy officer, said Raj Patel, partner and cybersecurity practice leader at consulting firm Plante Moran. “Each has a role in protecting data and assets,” he said. “The conversation can start simply by scheduling a meeting around it.” These talks should take place at least quarterly, according to Patel, and should not be focused solely on the budget. “We don’t fight a war on budgets but do what we need to defend ourselves,” he said. “When our organizations get attacked every day, we are in a war. Many finance executives focus on a budget and at times compare it to prior budgets. When it comes to cybersecurity, the focus needs to be on risk, and allocating financial resources should be based on risk.”


The cloud ate my database

The first version of PostgreSQL was released in 1986, and MySQL followed less than a decade later in 1995. Neither displaced the incumbents—at least, not for traditional workloads. MySQL arguably took the smarter path early on, powering a host of new applications and becoming the “M” in the famous LAMP stack (Linux, Apache, MySQL, PhP/Perl/Python) that developers used to build the first wave of websites. Oracle, SQL Server, and DB2, meanwhile, kept to their course of running the “serious” workloads powering the enterprise. Developers loved these open source databases because they offered freedom to build without much friction from traditional gatekeepers like legal and purchasing. Along the way, open source made inroads with IT buyers, as Gartner showcases. Then the cloud happened and pushed database evolution into overdrive. Unlike open source, which came from smaller communities and companies, the cloud came with multibillion-dollar engineering budgets, as I wrote in 2016. Rather than reinvent the open source database wheel, the cloud giants embraced databases such as MySQL and turned them into cloud services like Amazon RDS.


Everything CISOs Need to Know About NIST

When it comes to protecting your data, NIST is the gold standard. That said, the government does not mandate it for every industry. CISOs should comply with NIST standards, but business leaders can handle risk management with whichever approach and standards they believe will best suit their business model. However, federal agencies must use these standards. As the U.S. government endorses NIST, it came as little surprise when Washington declared these standards the official security control guidelines for information systems at federal agencies in 2017. Similarly, if CISOs work with the federal government as contractors or subcontractors, they must follow NIST security standards. With that in mind, any contractor who has a history of NIST noncompliance may be excluded from future government contracts. The Cybersecurity Framework is one of the most widely adopted standards from NIST. While optional, this framework is a trusted resource that many companies adhere to when attempting to reduce risk and improve their cybersecurity systems and management. 


What Does The Future Hold For Serverless?

In production-level serverless applications, monitoring your application is paramount to your success. You need to know if you’ve dropped any events, where the bottlenecks are, and if items are piling up in dead letter queues. Not to mention you need the ability to trace a transaction end to end. This is an area that is finally beginning to take off. As more and more serverless production workloads are coming online, it is becoming increasingly obvious there’s a gap in this space. Vendors like DataDog, Lumigo, and Thundra all attempt to solve this problem - with pretty good success. But it needs to be better. In the future we need tools like what the vendors listed above offer, but with optimization and insights built-in like AWS Trusted Advisor. We need app monitoring to evolve. When we hear application monitoring, we need to assume more than service graphs and queue counts. Application monitoring will become more than fancy dashboards and slack messages. It will eventually tell us we provisioned the wrong infrastructure from the workload it sees.


Cybersecurity on the board: How the CISO role is evolving for a new era

More and more businesses agree. Gartner's survey of board directors found that 88% view cybersecurity as not only a technical problem for IT departments to solve, but a fundamental risk to how their businesses operate. That’s hardly surprising, given the recent history of hacks against private businesses. ... Ensuring the CISO has a seat on the board is one way of ensuring a company has a firm handle on how to handle these risks to the business. Even so, says Andrew Rose, resident CISO at security company Proofpoint, they should be careful in how they communicate their concerns. “The 'sky is falling' narrative can be used once or twice, but after that, the board will become a bit numb to it all,” Rose explains. Forcing boards to prioritise cybersecurity should instead be done through positive affirmation, argues Carson - and, ideally, be framed in how shoring up the company’s defences will help it perform better in the long term. “You need to show them how this is going to help the business be successful, how it will help employees to do their jobs better, provide value to the shareholders, [and] return an investment,” he says.


Neuro-symbolic AI brings us closer to machines with common sense

Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades. Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems. Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and approximating functions and more toward the idea of all the things the human mind does when you’re modeling the world, explaining and understanding the things you’re seeing, imagining things that you can’t see but could happen, and making them into goals that you can achieve by planning actions and solving problems?”


IT Security Decision-Makers Struggle to Implement Strategies

While businesses still have many privileged identities left unprotected, such as application and machine identities, attackers will continue to exploit and impact business operations in return for a ransom payment, Carson said. "The good news is that organizations realize the high priority of protecting privileged identities," he added. "The sad news is that many privileged identities are still exposed as it is not enough just to secure human privileged identities." ... The security gap is not only increasing between the business and attackers, but also between the IT leaders and the business executives, according to Carson. "While in some industries this is improving, the issue still exists," he said. "Until we solve the challenge on how to communicate the importance of cybersecurity to the executive board and business, IT security decision-makers will continue to struggle to get the needed resources and budget to close the security gap." From Carson's perspective, that means there needs to be a change in the attuite at the C-suite level.


GraphQL is a big deal: Why isn’t it the industry standard for database querying?

What if you could leverage the expressive attributes of SQL and the flexibility of GraphQL at the same time? There are technologies available that claim to do that, but they are unlikely to become popular because they end up being awkward and complex. The awkwardness arises from attempting to force SQL constructs into GraphQL. But they are different query languages with different purposes. If developers have to learn how to do SQL constructs in GraphQL, they might as well use SQL and connect to the database directly. However, all is not lost. We believe GraphQL will become more expressive over time. There are proposals to make GraphQL more expressive. These may eventually become standards. But fundamentally, SQL and GraphQL have different world views, respectively: uniform backends vs. diverse backends, tables vs. hierarchical data, and universal querying vs. limited querying. Consequently, they serve different purposes. 


ESG: Building On Commitments On The E To Boost The S & The G

The first milestone could very well apply to enhancing data on anti-corruption. Challenges, of course, exist—corruption tends to be more political within organisations, and there can be hesitation to report on incidences of it. Measuring progress on reducing corruption is challenging, and indicators have to be carefully considered. For example, if the number of reported cases of crime increases in a given period, it could mean different things: anti-corruption mechanisms are working better and are well enough designed to identify corruption; people trust in the whistleblowing system and feel confident to report; or, indeed, corruption levels are going up. Nevertheless, academic scholarship and investment in anti-corruption are resulting in new indicators being developed (for example, the recently updated Index of Public Integrity [IPI] and the Transparency Index [T-Index] developed by Professor Alina Mungiu-Pippidi of the Hertie School). Collaboration with researchers and anti-corruption specialists could help design better data-collection methods.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - July 11, 2022

What Do Authentication & Authorization Mean In Zero Trust?

Authorization depends on authentication. It makes no sense to authorize a user if you do not have any mechanism in place to make sure the person or service is exactly what, or who, they say they are. Most organizations have some mechanism in place to handle authentication, and many have role-based access controls (RBAC) that group users by role, and grant or deny access based on those roles. In a zero trust system, however, both authentication and authorization are much more granular. To return to the castle analogy we explored previously, before zero trust the network would be considered a castle, and inside the castle there would be many different types of assets. In most organizations, human users would be authenticated individually — have to prove not only that they belong to a particular role, but that they are exactly the person they say they are. Service users can often also be granularly authenticated. In a RBAC system, however, each user is granted or denied access on a group basis — all the human users in the “admin” category would get blanket access, for example.


As hiring freezes and layoffs hit, is the bubble about to burst for tech workers?

Until now, the tech industry has largely sailed through the economic turbulence that has impacted other industries. Remote working and an urgency to put everything on the cloud or in an app – significantly accelerated by the pandemic – has created fierce demand for those who can create, migrate, and secure software. However, tech leaders are bracing for tough times ahead. According to recent data by CW Jobs, 85% of IT decision makers expect their organization to be impacted by the cost of doing business – including hiring freezes (21%) and pay freezes (20%). We're already seeing this play out, with Tesla, Uber and Netflix amongst the big names to have announced hiring freezes or layoffs in recent weeks. Meanwhile, Microsoft, Coinbase and Meta have all put dampeners on recruiting. If tech workers are concerned about this ongoing tightening of belts, they aren't showing it: the same CW Jobs report found that tech professionals remain confident enough in the industry that 57% expect a pay rise in the next year. Hiring freezes and layoffs don't seem to have had much impact on worker mobility, either: just 24% of professionals surveyed by CW Jobs say they plan to stay in their current role for the next 12 months. 


ERP Modernization: How Devs Can Help Companies Innovate

Many of these ERP-based companies are facing pressure to update to more modern, cloud-based versions of their ERP platforms. But they must run a gauntlet to modernize their legacy applications. In a sense, companies that maintain these complex ERP-based systems find the environments are like “golden handcuffs.” They have become so complicated over time that they restrain IT departments’ innovation efforts, hindering their ability to create supply chain resiliency when it is most needed. To make matters more difficult, the current market is facing a global shortage of human resources required to get the job of digital transformation and application modernization done, including skilled ERP developers—especially those skilled in more antiquated languages like ABAP. Incoming developer talent is often trained in more contemporary languages like Java, Steampunk and Python. These graduates have their pick of opportunities and gravitate to companies that already work in these newer programming environments. ERP migrations can be hampered by complex, customized systems developed by high-priced, silo-skilled programmers. 


Believe it or not, metaverse land can be scarce after all

As we see, technological constraints and business logic dictate the fundamentals of digital realms and the activities these realms can host. The digital world may be endless, but the processing capabilities and memory on its backend servers are not. There is only so much digital space you can host and process without your server stack catching fire, and there is only so much creative leeway you can have within these ramifications while still keeping the business afloat. These frameworks create a system of coordinates informing the way its users and investors interpret value — and in the process, they create scarcity, too. While a lot of the valuation and scarcity mechanisms come from the intrinsic features of a specific metaverse as defined by its code, the real-world considerations have just as much, if not more, weight in that. And the metaverse proliferation will hardly change them or water the scarcity down. ... So, even if they are not too impressive, they will likely be hard to beat for most newer metaverse projects, which, again, takes a toll on the value of their land. By the same account, if you have one AAA metaverse and 10 projects with zero users, investors would go for the AAA one and its lands, as scarce as they may be.


Building Neural Networks With TensorFlow.NET

TensorFlow.NET is a library that provides a .NET Standard binding for TensorFlow. It allows .NET developers to design, train and implement machine learning algorithms, including neural networks. Tensorflow.NET also allows us to leverage various machine learning models and access the programming resources offered by TensorFlow. TensorFlow is an open-source framework developed by Google scientists and engineers for numerical computing. It is composed by a set of tools for designing, training and fine-tuning neural networks.TensorFlow's flexible architecture makes it possible to deploy calculations on one or more processors (CPUs) or graphics cards (GPUs) on a personal computer, server, without re-writing code. Keras is another open-source library for creating neural networks. It uses TensorFlow or Theano as a backend where operations are performed. Keras aims to simplify the use of these two frameworks, where algorithms are executed and results are returned to us. We will also use Keras in our example below.


4 examples of successful IT leadership

IT leaders are responsible for implementing technology and data infrastructure across an organization. This can include CIOs, CTOs, and increasingly, CDOs (Chief Data Officers). To do this effectively, IT teams need employee buy-in, illustrating clearly how new technology tools and project management can benefit the company’s mission and goals. To achieve the full support of the employee base, IT teams must explain the implementation process and expected timeline. While data platforms and cloud infrastructure are important, the table stakes are tools that allow for internal communication and collaboration. Many IT teams are leveraging business process management platforms (BPMs), which help enable better collaboration between remote and in-office teams, offering a shared view of projects. These platforms allow for greater visibility and communication across organizations while reducing meeting time and improving workflow efficiencies. Technology has the potential to increase productivity, provide greater visibility of projects for employees and managers, and automate tasks that are repetitive and time-consuming.


Why 5G is the heart of Industry 4.0

The Internet of Things (IoT) is an integral part of the connected economy. Many manufacturers are already using IoT solutions to track assets in their factories, consolidating their control rooms and increasing their analytics functionality through the installation of predictive maintenance systems. Of course, without the ability to connect these devices, Industry 4.0 will, naturally, languish. While low power wide area networks (LPWAN) are sufficient for some connected devices such as smart meters that only transmit very small quantities of data, in manufacturing the opposite is true of IoT deployment, where numerous data-intensive machines are often used within close proximity. This is why 5G connectivity is key to Industry 4.0. In a market reliant on data-intensive machine applications, such as manufacturing, the higher speeds and low latency of 5G is required for effective use of automatic robots, wearables and VR headsets, shaping the future of smart factories. And while some connected devices utilised 4G networks using unlicensed spectrum, 5G allow this to take place on an unprecedented scale. 


How to Handle Authorization in a Service Mesh

A service mesh addresses the challenges of service communication in a large-scale application. It adds an infrastructure layer that handles service discovery, load balancing and secure communication for the microservices. Commonly, a service mesh complements each microservice with an extra component — a proxy often referred to as a sidecar or data plane. The proxy intercepts all traffic from and to its accompanied service. It typically uses MutualTLS, an encrypted connection with client authentication, to communicate with other proxies in the service mesh. This way, all traffic between the services is encrypted and authenticated without updating the application. Only services that are part of the service mesh can participate in the communication, which is a security improvement. In addition, the service mesh management features allow you to configure the proxy and enforce policies such as allowing or denying particular connections, further improving security. To implement a Zero Trust architecture, you must consider several layers of security. The application should not blindly trust a request even when receiving it over the encrypted wire.


DevOps nirvana is still a distant goal for many, survey suggests

"Development teams, in general, have hardly any insight into how customers benefit from their work, and few are able to discuss these benefits with the business," the authors report. "Having such insights ready at hand would improve collaboration between IT and the business. The more customer value metrics a development team tracks, the more positive that team views their working relationship with the business. Without knowing whether the intended value for the customer is being achieved or not, development teams are effectively flying blind." The LeanIX authors calculate that 53% work on a team with a 'low level' of DevOps based on maturity factors. Still, nearly 60% said that they are flexible in adapting to changing customer needs and have CI/CD pipelines set up. At the same time, less than half of engineers build, ship, or own their code or work on teams based on team topologies, indicating a lack of DevOps maturity. Fewer than 20% of respondents said that their development team was able to choose its own tech stack; 44% said they are partly able to, and 38% they are not able to at all.


Survey Shows Increased Reliance on DORA Metrics

Overall, the survey revealed just under half of the respondents (47%) said their organization had a high level of DevOps maturity, defined as having adopted three or more DevOps working methods. Those working methods are: Being flexible to changes in customer needs; having implemented a CI/CD platform; all engineers build, ship and own their own code; teams are organized around topologies and each team is free to choose its own technology stack. Of course, each individual organization will determine for itself what level of DevOps depth is required. For example, not every organization would see the need for teams to be organized around topologies or be free to choose its own technology stack. In fact, Rose said the survey made it clear that larger enterprise IT organizations tended to have a lower overall level of DevOps maturity. One reason for that is many larger organizations are still employing legacy processes to build and deploy software, noted Rose. Most developers are also further along in terms of embracing continuous integration (CI) than IT operations teams are in adopting continuous delivery (CD), added Rose.



Quote for the day:

"It is not joy that makes us grateful. It is gratitude that makes us joyful." -- David Rast