
Daily Tech Digest - May 26, 2025


Quote for the day:

“Don't blow off another's candle for it won't make yours shine brighter.” -- Jaachynma N.E. Agu



Beyond single-model AI: How architectural design drives reliable multi-agent orchestration

It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just about building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that rely on each other, act asynchronously, and can fail independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. ... For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough.


Unstructured Data Management Tips

"Unlike traditional databases, which define the schema -- the data's structure -- before it's stored, schema-on-read defers this process until the data is actually read or queried," says Kamal Hathi, senior vice president and general manager at Splunk, a Cisco company that makes machine-generated data monitoring and analysis software. This approach is particularly effective for unstructured and semi-structured data, where the schema is not predefined or rigid, Hathi says. "Traditional databases require a predefined schema, which makes working with unstructured data challenging and less flexible." ... Manage unstructured data by integrating it with structured data in a cloud environment using metadata tagging and AI-driven classifications, suggests Cam Ogden, a senior vice president at data integrity firm Precisely. "Traditionally, structured data -- like customer databases or financial records -- reside in well-organized systems such as relational databases or data warehouses," he says. However, to fully leverage all of their data, organizations need to break down the silos that separate structured data from other forms of data, including unstructured data such as text, images, or log files. This is where the cloud comes into play. Integrating structured and unstructured data in the cloud allows for more comprehensive analytics, enabling organizations to extract deeper insights from previously siloed information, Ogden says.
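To make the schema-on-read idea concrete, here is a small illustrative sketch (not Splunk's API; the records and field names are invented): raw records are stored as-is, and a "schema" is just a projection applied at query time, so the same stored data can serve different queries with different shapes.

```python
import json

# Raw, schema-less event log: each line is whatever the producer emitted.
raw_events = [
    '{"user": "alice", "action": "login", "ts": 1700000000}',
    '{"user": "bob", "action": "purchase", "amount": 42.5}',
    '{"msg": "heartbeat"}',
]

def read_with_schema(lines, fields):
    """Apply a schema only at read time: project each record onto the
    requested fields, tolerating records that lack some of them."""
    for line in lines:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

# Two different "schemas" over the same stored data.
logins = [r for r in read_with_schema(raw_events, ["user", "action"])
          if r["action"] == "login"]
amounts = [r for r in read_with_schema(raw_events, ["user", "amount"])
           if r["amount"] is not None]
print(logins)   # [{'user': 'alice', 'action': 'login'}]
print(amounts)  # [{'user': 'bob', 'amount': 42.5}]
```

A schema-on-write database would have rejected the heartbeat record at ingest; here it is simply ignored by queries that don't ask for its fields.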


Why IT Certifications Are Now the Hottest Currency in Tech

The reasons are manifold. Inflation has eroded buying power, traditional merit-based raises have declined, bonuses are scarcer and 2024 saw a sharp uptick in layoffs - particularly targeting middle management and older professionals. Unlike the "Great Resignation" of 2021, professionals today are staying put - not from loyalty but from caution, and upskilling is the key to ensuring their longevity. Faced with a precarious job market and declining benefits, many IT employees are opting for stability and doubling down on internal mobility. According to Pearson VUE's 2025 Value of IT Certification Candidate Report, more than 80% of the respondents who hold at least one certification said it enhanced their ability to innovate and 70% said they experienced greater autonomy in the workplace. Even in regions where pay bumps are smaller, the career mobility afforded by certifications remains evident. In India, for instance, CloudThat's IT Certifications and Salary Survey found that Microsoft-certified professionals earn an average entry salary of $10,900, with 60% of certified workers reporting pay hikes. "The increased value in certifications underscores their critical role in equipping professionals with the skills needed to excel and advance in their roles. As the industry continues to grow, certifications are becoming essential to stand out and meet the demand for specialized skills," said Bhavesh Goswami, founder and CEO of CloudThat.


Speed and scalability redefine the future of modern banking

To expedite digitalisation, global policymakers are introducing regulations such as India’s Digital Banking Units (DBUs), the EU’s PSD2/PSD3 directives, and the GCC’s open finance guidelines. The growth of non-bank financial intermediaries (NBFIs) in recent years, more intricate and wider in scope than before, necessitates more effective compliance frameworks and better risk management strategies. ... Integrating banking directly into non-financial platforms such as e-commerce is on the rise. According to a report by Grand View Research, the global Banking-as-a-Service (BaaS) market is expected to reach USD 66 billion by 2030. Retailers increasingly partner with banks for instant, personalised offers and payments via identity beacons, enhancing customer experiences through Gen AI-supported interactions. For example, real-time data analytics and machine learning models are now essential for personalised financial services. Reimagined branch visits are becoming an emerging trend, with branches shifting to high-footfall locations like malls. The store-like experience includes personalised offers and decision aids, including immediate approval for flexible loans, made possible by consent-based customer identification.


5 questions to test tech resilience and build a 90-day action plan

The convergence of AI with existing systems has brought technical debt into sharp focus. While AI, and agentic AI in particular, presents transformative opportunities, it also exposes the limitations of legacy systems and architectural decisions made in the past. It’s essential to balance the excitement of AI adoption with the pragmatic need to address underlying technical debt, as we explored in our recent research. ... While AI enthusiasm runs high, successful implementation requires careful focus on use cases that deliver tangible business value. CIOs must lead their organizations in identifying and executing AI initiatives that drive meaningful outcomes. That means defining AI programs with a holistic, end-to-end vision of how they’ll deliver value for your business. And it means taking a platform approach, as opposed to numerous isolated PoCs. ... The traditional boundaries of IT are dissolving. With technology now fundamentally driving business strategy, CIOs must lead the evolution from an IT operating model to a new business technology operating model. Recent data shows organizations that have embraced this transformation achieved 15% higher top-line performance compared to their peers, with potential for this gap to double by next year.


LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks

One particularly concerning area is the use of LLMs in coding applications. “Coding agents that rely on LLM-generated code may inadvertently introduce security vulnerabilities into production systems,” Chennabasappa warned. “Misaligned multi-step reasoning can also cause agents to perform operations that stray far beyond the user’s original intent.” These types of risks are already surfacing in coding copilots and autonomous research agents, she added, and are only likely to grow as agentic systems become more common. Yet while LLMs are being embedded deeper into mission-critical workflows, the surrounding security infrastructure hasn’t kept pace. “Security infrastructure for LLM-based systems is still in its infancy,” Chennabasappa said. “So far, the industry’s focus has been mostly limited to content moderation guardrails meant to prevent chatbots from generating misinformation or abusive content.” That approach, she argued, is far too narrow. It overlooks deeper, more systemic threats like prompt injection, insecure code generation, and abuse of code interpreter capabilities. Even proprietary safety systems that hardcode rules into model inference APIs fall short, according to Chennabasappa, because they lack the transparency, auditability, and flexibility needed to secure increasingly complex AI applications.


Navigating Double and Triple Extortion Tactics

In double extortion attacks, a second layer is added: attackers, having gained access to the system, exfiltrate sensitive and valuable data. This not only deepens the victim’s vulnerability but also increases pressure, as attackers now hold both encrypted files and stolen information, which they can use as leverage for further demands. The threat of double extortion becomes more severe as it combines operational disruption (due to encrypted data and downtime) with the risk of public exposure. Organizations unable to access their data face halted services, financial loss, and reputational damage. ... Triple extortion expands upon traditional and double extortion ransomware tactics by introducing a third layer of pressure. The attack begins with data encryption and exfiltration, similar to the double extortion model—locking the victim out of their data while simultaneously stealing sensitive information. This stolen data gives attackers multiple avenues to exploit the victim, who is left with no control over its fate. The third stage involves third-party extortion. After collecting data from the primary victim, attackers identify and target affiliated parties, such as partners, clients, and stakeholders, whose information was also compromised. 


The 7 unwritten rules of leading through crisis

Your first move shouldn’t be panic-fixing everything in silence, Young says. “You need to let people know what’s going on, including your team, your leadership, and sometimes even your customers.” Keeping everyone in the loop calms nerves and builds trust. Silence makes everything worse, Young warns. ... Confusion is contagious. “Providing clarity about what’s known, what matters, and what you’re aiming for, stabilizes people and systems,” says Leila Rao, a workplace and executive coaching consultant. “It sets the tone for proactivity instead of reactivity.” Simply treating symptoms will make the problem worse, Rao warns. “Misinformation spreads, trust erodes, and well-intentioned responses become counterproductive.” Crisis is complexity on steroids, Rao observes. “When we center people, welcome multiple perspectives, and make space for emergence, we move from crisis management to collective learning.” ... You can’t hide from a crisis, and attempting to do so only compounds the damage, Hasmukh warns. “Clear visibility into what happened allows you to respond effectively and maintain stakeholder trust during challenging times.” Organizations that delay acknowledging issues inevitably face greater scrutiny and damage than those that address situations head-on.


BYOD like it’s 2025

The data is clear that there can be significant gains in productivity attached to BYOD. Samsung estimates that workers using their own devices can gain about an hour of productive worktime per day, and Cybersecurity Insiders says that 68% of businesses see some degree of productivity increases. Although the gains are significant, personal devices can also distract workers more than company-owned devices, with personal notifications, social media accounts, news, and games being the major time-sink culprits. This has the potential to be a real issue, as these apps can become addictive and their use compulsive. ... One challenge for BYOD has always been user support and education. With two generations of digital natives now comprising more than half the workforce, support and education needs have changed. Both millennials and Gen Z have grown up with the internet and mobile devices, which makes them more comfortable making technology decisions and troubleshooting problems than baby boomers and Gen X. This doesn’t mean that they don’t need tech support, but they do tend to need less hand-holding and don’t instinctively reach for the phone to access that support. Thus, there’s an ongoing shift to self-support resources and other, less time-intensive, models with text chat being the most common — be it with a person or a bot.


You have seen the warnings: your next IT outage could be worse

In-band management uses the same data path as production traffic to manage the customer environment, while logically isolating management traffic from production data. Although this approach can be more cost-effective, it introduces certain risks. If a problem occurs with the production network, it can also disrupt management access to the infrastructure, a situation referred to as “fate sharing.” In these cases, the only viable solution may be to send an engineer onsite to diagnose and resolve the issue. This can result in significant costs and delays, potentially impacting the customer’s business operations. Out-of-band management, on the other hand, uses a separate network to provide independent access for managing the infrastructure, completely isolating management traffic from the production network. This separation is crucial during major disruptions like provider outages or security breaches, as it guarantees continuous access to network devices and servers, even if the primary production network is down or compromised. ... A secure connection links this cloud infrastructure to the customer’s on-premises IT setup, usually through a dedicated private network connection, SD-WAN, or an IPsec VPN. This connection typically terminates at an on-premises router or firewall, safeguarding access to the out-of-band management network.

Daily Tech Digest - September 04, 2023

What happens when finops finds bad cloud architecture?

Cloud finops teams can evaluate the performance and scalability of cloud infrastructure. Monitoring key performance indicators such as response times, latency, and throughput can identify bottlenecks or areas where the current architecture limits scalability and performance. Since finops normally tracks this through money spent, it’s easy to determine exactly how much architecture blunders are costing the company. It’s not unusual to find that a cloud-deployed system costs 10 times more money per month than it should. Those numbers are jarring for most businesses. Remember, all that money could have been spent in other places, such as on innovations. ... However, there are more strategic blunders, such as only using a single cloud provider (see example above). Maybe it seemed like a good idea at the time. Perhaps a vendor had a relationship with several board members, or there were political reasons for the limited choices. Unfortunately, the company still ends up with a great deal of technical debt which could have been avoided.


The quantum threat: Implications for the Internet of Things

Quantum computing, though it might be a decade or two away, presents a threat to IoT devices that have been secured against the current threat and which may remain in place for many years. To address this threat, governments are already spending billions, while organisations like NIST and ETSI are several years into programmes to identify and select post-quantum algorithms (PQAs) and industry and academia are innovating. And we are approaching some agreement on a suite of algorithms that are probably quantum safe; both the UK’s NCSC and the US’ NSA endorse the approach of enhanced Public Key cryptography using PQA along with much larger keys. The NCSC recommends that the majority of users follow normal cyber security best practice and wait for the development of NIST standards-compliant quantum-safe cryptography (QSC) products. That potentially leaves the IoT with a problem. Most of these enhanced QSC standards appear to require considerable computing power to deal with complex algorithms and long keys – and many IoT sensors may not be capable of running them.


What is industry cloud?

Industry cloud platforms allow businesses operating in the same sector to share or sell data, technologies, and processes to each other. The potential benefits can be significant, as an industry cloud enables interrelated members of a supply chain to access insights derived from potentially expanded data sets. An industry cloud can offer companies an exciting opportunity to exploit existing data they are not leveraging in a constructive way. ... Joining an industry cloud can offer significant benefits for companies, but many may reflexively balk at the idea of sharing or selling data. Consequently, it’s important that a company has a supportive constituency when considering an industry cloud. Each type of vendor has its own challenges in developing an industry cloud platform. For industry clouds driven by supply chain leaders, the most important requirement will be reexamining tools and methodologies to meet the needs of less sophisticated supply chain participants. Avoiding the temptation to abandon the industry cloud and retreat to a standard cloud for internal use is also a challenge.


Why Instagram Threads is a hotbed of risks for businesses

Threads is very easy to both download and sign up for, as it integrates seamlessly with a user's Instagram account when first signing up for the platform. However, this seamless integration could pose security risks, according to a blog from AgileBlue. Instagram, Facebook, and now Threads are all owned by Meta and for many users, each of their Meta accounts share the same login credentials between each of the platforms. "This makes it much easier for malicious actors to access information as gaining access to just one account ultimately gives them access to all Meta accounts," the blog said. In fact, as of writing, only users with an Instagram account can create a Threads account, so if an individual wants to sign up for Threads, they will first have to create an Instagram account. "If an employee's Threads account is compromised, malicious actors can impersonate the employee to gather information or spread misinformation within their close circle," Guenther says.


With BYOD comes responsibility — and many firms aren't delivering

Management must learn and share the benefits of these systems, make it crystal clear how data will be handled, and put protection in place to ensure personal data remains personal. Communication is critical here. It's also critical in securing the inevitable weak point of any form of security protection — the users themselves. With that in mind, companies should invest in training staff in security awareness and encourage them to update devices as and when those updates appear. Companies should also set standards — and devices that don’t meet those standards, in terms of security protection, should not gain access to corporate systems. This is all common sense stuff, really. We know the security environment is extremely challenging — even police forces are regularly hacked. In that context, it makes total sense to think about how to manage the devices connected to your systems and to put in place the software, security, and user education it takes to protect your business environments. The cost of device management is relatively negligible compared to the consequences of a successful ransomware attack, after all.


Why Enterprise Architecture Must Drive Sustainable Transformation

To some, it may seem odd to present these as parallel, equivalent pressures on businesses. Surely, the continued viability of civilization as we know it should far outweigh any governmental or regulatory proposal in our thinking about the future? The importance of the changing regulatory environment, however, lies not just in its ability to trigger business action: it is a real opportunity for businesses to transform themselves to a more meaningful, consequential sustainability approach. A report co-authored by the WEF and Boston Consulting Group, ‘Net-Zero Challenge: The supply chain opportunity’, found that the supply chains of just eight sectors, including food, construction, and fashion, account for more than 50% of global emissions. It also found that 40% of the emissions could be abated with already-available measures like circular manufacturing and renewable energy. Even achieving net zero emissions in those supply chains, according to the report’s investigations, would only raise costs for end-consumers by 1%-4% on average.


Lean for the modern company

A strong esprit de corps among team members has also long been critical to support healthy growth and the creation of synergistic value, and the book emphasizes the importance of building a healthy culture able to support lean processes and outcomes. This section includes clever material on nurturing a culture of experimentation and discovery, and validating trust by constantly raising the bar on deliverables and expectations. May and Dominguez ground their principles in the core lean ideal of starting with value and working backward—focusing obsessively on improving operations to seamlessly deliver for the customer what they call the “Job to be Done.” The authors’ material on accelerating value creation recapitulates this goal and reminds readers to be vigilant about combating the inevitable waste generated by successful companies. As a writer about lean for nearly two decades, I’ve often been frustrated by misrepresentations of this dynamic system by management gurus who tout only thin-sliced elements of it.


4 Key Observability Best Practices

For cost reasons, becoming comfortable with tracking the current telemetry footprint and reviewing options for tuning — like dropping data, aggregating or filtering — can help your organization better monitor costs and platform adoption proactively. The ability to track telemetry volume by type (metrics, logs, traces or events) and by team can help define and delegate cost-efficiency initiatives. Once you’ve gotten a handle on how much telemetry you’re emitting and what it’s costing you, consider tracking the daily and monthly active users. This can help you pinpoint which engineers need training on the platform. ... Teams need better investigations. One way to ensure a smoother remediation process is to follow an organized trail of breadcrumbs rather than relying on 10 different bookmarked links and a mental map of what data lives where. Start by understanding what telemetry your system emits across metrics, logs and traces, then pinpoint potential duplication or better sources of data.
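As a rough sketch of the accounting described above (the records and team names are made up; real figures would come from your observability vendor's usage or billing API), telemetry volume can be rolled up by signal type and by owning team:

```python
from collections import Counter

# Hypothetical per-batch telemetry usage records.
telemetry = [
    {"type": "metrics", "team": "payments", "bytes": 1_200},
    {"type": "logs",    "team": "payments", "bytes": 98_000},
    {"type": "logs",    "team": "search",   "bytes": 45_000},
    {"type": "traces",  "team": "search",   "bytes": 7_000},
]

volume_by_type = Counter()
volume_by_team = Counter()
for rec in telemetry:
    volume_by_type[rec["type"]] += rec["bytes"]
    volume_by_team[rec["team"]] += rec["bytes"]

# most_common() sorts descending, surfacing the biggest cost drivers first.
print(volume_by_type.most_common())  # logs dominate in this sample
print(volume_by_team.most_common())
```

Rollups like these make it straightforward to hand each team its own slice of the bill and to spot where filtering or aggregation would pay off first.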


Software Engineering in the Age of Climate Change: A Testing Perspective

Regression testing confirms that new code does not break existing functionality. Preventing regressions reduces the need for repeated testing and bug fixes, optimizing the software development lifecycle and minimizing unnecessary computational resources. ... Online education platforms introduce new features to enhance user experiences. Regression testing ensures these changes do not disrupt existing lessons or content delivery. By maintaining stability, energy is saved by minimizing the need for post-deployment fixes. Suppose a telecommunications company is rolling out a software update for its network infrastructure to improve data transmission efficiency and reduce latency. The update includes changes to the routing algorithms used to direct data traffic across the network. While the primary goal is to enhance network performance, there is a potential risk of introducing regressions that could disrupt existing services. Before deploying the software update to the entire network, the telecommunications company conducts thorough regression testing.
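A toy illustration of golden-case regression testing in the spirit of the routing example above (the function and test cases are hypothetical): record known-good outputs before a change, replay them afterwards, and fail the deployment if any answer drifts.

```python
# A toy routing decision: pick the lowest-latency link for a destination.
def choose_route(links):
    """links: list of (name, latency_ms) pairs; returns the lowest-latency name."""
    return min(links, key=lambda link: link[1])[0]

# Golden cases recorded before the algorithm update; the regression suite
# replays them and reports any case whose answer has changed.
GOLDEN_CASES = [
    ([("fiber", 5), ("satellite", 600)], "fiber"),
    ([("lte", 40), ("dsl", 30)], "dsl"),
]

def run_regression_suite():
    return [(inp, expected) for inp, expected in GOLDEN_CASES
            if choose_route(inp) != expected]

print(run_regression_suite())  # [] means no regressions detected
```

Beyond catching breakage early, a compact golden suite keeps re-testing cheap: only changed answers need human attention, which is exactly the waste-reduction argument the section makes.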


How to make your developer organization more efficient

Automating manual tasks and repetitive processes is crucial for increasing developer efficiency. “Employing automation for tasks that many engineers face throughout their SDLC helps to shift focus towards human value-add activities. This also increases overall delivery throughput, with higher confidence in our development lifecycle, and produces consistent processes across teams that would otherwise be handled one-off and uniquely,” said Joe Mills. Developers can engage a team of automation experts to assess certain processes and tasks and help uncover automation opportunities. The team uses a hub-and-spoke model to scale their efforts across development teams at Discover and can help teams with robotic process automation, business automation, or code automation. ... In addition to these initiatives, engineers at Discover adhere to a set of practices, internally called CraftWorx, that define and direct the agile development process. Aligning engineers across these practices reduces friction because engineers and developers are following the same development practices.



Quote for the day:

"A leader takes people where they would never go on their own." -- Hans Finzel

Daily Tech Digest - August 03, 2022

Why the future of APIs must include zero trust

Devops leaders are pressured to deliver digital transformation projects on time and under budget while developing and fine-tuning APIs at the same time. Unfortunately, API management and security are an afterthought when devops teams rush to finish projects on deadline. As a result, API sprawl happens fast, multiplying when all devops teams in an enterprise don’t have the API management tools and security they need. More devops teams require a solid, scalable methodology to limit API sprawl and provide least-privileged access to APIs. In addition, devops teams need to move API management to a zero-trust framework to help reduce the skyrocketing number of breaches happening today. The recent webinar sponsored by Cequence Security and Forrester, Six Stages Required for API Protection, hosted by Ameya Talwalkar, founder and CEO, with guest speaker Sandy Carielli, principal analyst at Forrester, provides valuable insights into how devops teams can protect APIs. In addition, their discussion highlights how devops teams can improve API management and security.


India withdraws personal data protection bill that alarmed tech giants

The move comes as a surprise as lawmakers had indicated recently that the bill, unveiled in 2019, could see the “light of the day” soon. New Delhi received dozens of amendments and recommendations from a Joint Committee of Parliament that “identified many issues that were relevant but beyond the scope of a modern digital privacy law,” said India’s Junior IT Minister Rajeev Chandrasekhar. The government will now work on a “comprehensive legal framework” and present a new bill, he added. ... “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament; 81 amendments were proposed and 12 recommendations were made towards a comprehensive legal framework on the digital ecosystem. Considering the report of the JCP, a comprehensive legal framework is being worked upon. Hence, in the circumstances, it is proposed to withdraw ‘The Personal Data Protection Bill, 2019’ and present a new bill that fits into the comprehensive legal framework,” India’s IT Minister Ashwini Vaishnaw said in a written statement Wednesday.


Don't overengineer your cloud architecture

A recent Deloitte study uncovered some interesting facts about cloud computing budgets. You would think budgets would make a core difference in how businesses leverage cloud computing effectively, but they are not good indicators to predict success. Although this could indicate many things, I suspect that money is not correlated to value with cloud computing. In many instances, this may be due to the design and deployment of overly complex cloud solutions when simpler and more cost-effective approaches would work better to get to the optimized value that most businesses seek. If you ask the engineers why they designed the solution this way (whether overengineered or not), they will defend their approach around some reason or purpose that nobody understands but them. ... This is a systemic problem now, which has arisen because we have very few qualified cloud architects out there. Enterprises are settling for someone who may have passed a vendor’s architecture certification, which only makes them proficient in a very narrow grouping of technology and often doesn’t consider the big picture.


Leveraging data privacy by design

Privacy laws and regulations, therefore, can include guidelines for facilitating industry standards, benchmarks for privacy-enhancing technologies and funding for privacy-by-design research to incentivise technology designers to enhance privacy safeguards in their product designs, thereby promoting technological models that are privacy savvy. This can be better understood from the following example. The price paid for a helmet by a motorbike rider is a compliance cost, as it is an additional purchase requirement for safety over and above his immediate need for using a bike as a tool for commuting. However, a seat belt that is subsumed as a component of a car and not an additional requirement is perceived differently by the owner. Thus, compliance requirements that are perceived as additional obligations result in the perception of increased compliance costs, whereas compliance requirements embedded in the design of the product itself are considered part of the total product price and not separate costs. Privacy by design can thus prompt a shift in the business model: by incorporating privacy features within the technological design of the product itself, compliance comes to be seen as part of the product rather than as an added cost.


Is it bad to give employees too many tech options?

The most important question in developing (or expanding) an employee-choice model is determining how much choice to allow. Offer too little and you risk undermining the effort's benefits. Offer too much and you risk a level of tech anarchy that can be as problematic as unfettered shadow IT. There isn’t a one-size-fits-all approach. Every organization has unique culture, requirements/expectations, and management capabilities. An approach that works in a marketing firm would differ from a healthcare provider, and a government agency would need a different approach than a startup. Options also vary depending on the devices employees use — desktop computing and mobile often require differing approaches, particularly for companies that employ a BYOD program for smartphones. ... Google is making a play for the enterprise by offering ChromeOS Flex, which turns aging PCs and Macs into Chromebooks. This allows companies to continue to use machines that have dated or limited hardware, but it also means adding support for ChromeOS devices. 


Patterns and Frameworks - What's wrong?

Many people say that we should prefer libraries to frameworks and I must say that might be true. If a library could do the job you need (for example, the communication between a client and a server I presented at the beginning of the article) and meets the performance, security, protocols and any other requirements your service needs to support, then the fact we can have a "Framework" automate some class generations for us might be of minor importance, especially if such a Framework will not be able to deal with the application classes and would force us to keep creating new patterns just to convert object types. ... Yet, they fall short when dealing with app specific types and force us to either change our types just to be able to work with the framework or, when two or more frameworks are involved, there's no way out and we need to create alternative classes and copy data back and forth, doing the necessary conversions, which completely defeats the purpose of having the transparent proxies.


Where are all the technologists? Talent shortages and what to do about them

Instead of looking for that complete match, shift to 80% instead – the other 20% can almost always be met through training, support and development once in the job. Another area for flexibility is age. The most sought-after candidates are in the 35-49 age bracket. But don’t rule out the under-35s or the over-50s. There are brilliant people in both groups – one with all the potential for the future, the other with invaluable experience and work know-how. This brings us to another absolutely key approach: invest in training and upskilling. I have one client who is looking ahead and can see that they will have a significant software development skills requirement in about four years’ time. So they are training their existing software engineers now, so they can move into these roles when the time comes. There is a growing emphasis among digital leaders on increasing the amount of internal cross-training into tech. This is something that can be applied externally, too. Look outside the business for talent that can be supported into a tech career – people who may be in other fields right now but have the right aptitude, mindset and ambition.


We’re Spending Billions Each Year on Cybersecurity. So Why Aren’t Data Breaches Going Away?

As companies invest heavily in technology, communication, and training to reduce cybersecurity risk and as they begin seeing the positive impact of those efforts, they may let their guard down—not paying as much attention to the risks, not communicating as often, or failing to ensure that new employees (or employees in new positions) are receiving the information and training they need. Cybercrooks only need to be successful once to achieve their goals, but companies need to be successful 100% of the time to avoid being compromised. Consider this: security is subject to the same natural laws that govern the rest of the universe. Entropy is real… we move from order to chaos. ... A strong security culture is a must-have to combat the continuous threats that all companies are subject to. Employees’ security awareness, behaviors and the organization’s culture must be assessed regularly. Policies and training programs should be consistently updated to address the changing threat landscape. Failure to do so puts companies at risk of data theft, business interruption, or falling victim to ransomware scams.


What is supervised machine learning?

A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels. Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles. Some algorithms use subject-matter experts and ask them to review outlying data. Instead of classifying all images, this approach works with the most extreme values and extrapolates rules from them.
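The consistency check described above can be sketched as a simple majority vote over rater labels; the label names and agreement threshold here are illustrative, not any particular vendor's scheme:

```python
from collections import Counter

def consensus_label(votes, min_agreement=0.6):
    """Return the majority label if enough raters agree, else None.

    votes: labels assigned by different raters to the same item.
    min_agreement: fraction of raters that must back the winning label.
    """
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None

# Three of four raters saw a stop sign in the image: accept the label.
print(consensus_label(["stop_sign", "stop_sign", "pedestrian", "stop_sign"]))  # stop_sign
# A 50/50 split falls below the agreement threshold: hold the item back for review.
print(consensus_label(["stop_sign", "pedestrian"]))  # None
```

Items that fail the threshold are exactly the "outlying data" a subject-matter expert would be asked to review.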


Machine learning creates a new attack surface requiring specialized defenses

While all adversarial machine learning attack types need to be defended against, different organizations will have different priorities. Financial institutions leveraging machine learning models to identify fraudulent transactions will be highly focused on defending against inference attacks. If an attacker understands the strengths and weaknesses of a fraud detection system, they can use that knowledge to alter their techniques and go undetected, bypassing the model altogether. Healthcare organizations could be more sensitive to data poisoning. The medical field was among the earliest adopters of machine learning, using its massive historical data sets to predict outcomes. Data poisoning attacks can lead to misdiagnosis, alter the results of drug trials, misrepresent patient populations and more. Security organizations themselves are presently focusing on machine learning bypass attacks that are actively being used to deploy ransomware or backdoor networks. ... The best advice I can give to a CISO today is to embrace patterns we’ve already learned on emerging technologies.
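A minimal illustration of the data-poisoning idea, using a toy one-dimensional nearest-centroid "fraud detector" (the data, labels and scores are invented for the sketch; real attacks target far richer models):

```python
def centroid_classifier(points, labels):
    """Train a 1-D nearest-centroid classifier: predict the class whose mean is closer."""
    means = {}
    for cls in set(labels):
        vals = [x for x, y in zip(points, labels) if y == cls]
        means[cls] = sum(vals) / len(vals)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

# Clean training data: low risk scores are "legit", high scores are "fraud".
points = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
labels = ["legit"] * 3 + ["fraud"] * 3
clf = centroid_classifier(points, labels)
print(clf(6.5))  # fraud

# Poisoned data: an attacker relabels two high-score samples as "legit",
# dragging the "legit" centroid upward and opening a blind spot.
poisoned = ["legit"] * 3 + ["legit", "legit", "fraud"]
clf_poisoned = centroid_classifier(points, poisoned)
print(clf_poisoned(6.5))  # legit
```

Flipping just two training labels moves the decision boundary enough that the same suspicious transaction now sails through, which is why training-data integrity matters as much as model accuracy.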



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - October 04, 2021

4 Misconceptions about DevSecOps Every CIO Should be wary of

True DevSecOps, like DevOps, necessitates a harmonious collaboration of people, processes, and tools. It’s a culture, automation, and platform design approach that emphasizes security as a shared responsibility across the IT lifecycle. DevSecOps is, in fact, a human as well as a technical challenge. Personal development, culture, and connections with teams and managers are all critical factors in forming a successful DevSecOps team.  ... Cloud and cloud-native software and infrastructure are an ideal fit for DevSecOps. It is, nonetheless, useful for a wide range of environments, particularly for organizations that continue to apply a ten-year-old security playbook to their risk profile. Containerized cloud-native environments aren’t the only place where DevSecOps can be used. Some of the technological and process features of DevSecOps – as well as the general shift toward rapid, iterative development cycles – work well with microservices architectures, but not as well with big monoliths’ many dependencies and extensive test cycles. However, most organizations may benefit from DevSecOps’ cultural features, particularly those that have traditionally considered security as a pre-deployment checkbox rather than a priority ingrained throughout the organization.


Are You Too Late to Start Your Data Science Journey?

What concerned me the most about being too late was not the amount of material I needed to learn. Rather, I had doubts about whether I would be able to find a job by the time I had learned enough. Data science was a pretty hot topic and quite a number of people were already working in this field. In the last three years, I have been not only learning data science but also observing the dynamics of this field, and my thinking about being too late has changed. I was not too late to start back then. Moreover, if I started learning data science today, I would not be too late either. ... The biggest challenge for those who want to make a career change into data science is finding the first job. I faced the same challenge and it took me about two years to land my first job. This issue is not related to whether you are too late to start learning data science. The jobs are out there and increasing. However, without prior job experience, it is difficult to demonstrate your skills and convince employers or recruiters. 


3 fading and 3 future IT culture trends

Whether your IT team is remote, hybrid, or back in the office, all the pivots of 2020 made it clear just how crucial digital transformation is for business. But more than that, it’s important to have the right tech stack – one that’s simple, efficient, and centralized, not scattered or complicated. Adobe Workfront’s State of Work 2021 report indicates that 32 percent of employees have left a job due to inadequate technology that was a barrier to their workflow, and another 49 percent are likely to quit if the tech stack is frustrating or hard to use. IT leaders must scale down their technology in order to consolidate tools and software programs for maximum efficacy. ... While we’re on the subject of a centralized tech stack, let’s talk about the newer trend that has made an imprint on IT culture: the cloud-based workspace. Part of a tech solution called Infrastructure as a Service (IaaS), this digital hub is hosted in the cloud but accessible wherever there’s an internet connection. A cloud-based workspace also eliminates the need for complex hardware or equipment since workers can access it from a wireless device. 


Looking into the future of the metaverse

What will make or break the metaverse will be its ability to capture data from its surroundings and even the biosphere. The only way to do that will be by mass ingestion of the data coming from the Internet of Things. Only with this data will you be able to create a rich and meaningful environment. The next need after “seeing” will be “interacting,” meaning that the data not only needs to be represented in a meaningful way but also must be responsive. On the lowest level, equal to the physiological needs of humans in the real world, you can imagine the needs of a digital infrastructure in the metaverse: tools for ingestion of and access to data and the infrastructure to store, analyse and enrich data. But just like in the real world, before any meaningful interactions can be achieved, security needs to be guaranteed. With all the attention on the exciting possibilities of the metaverse, you could forget what infrastructures will be needed for the heavy lifting. It would have to be optimised for transferring and storing data. To make the metaverse attractive, not only would historical data need to be available, to facilitate context and depth in any interaction, but it would also have to be highly accurate.


5 Practical Steps To Protect Your Business From BYOD Security Risks

In general, personal mobile devices should not be considered the employee’s primary device – they should only be considered a convenience to access chat, email and other cloud apps when using a more secure device is not an option. Note that a VPN is needed when in a public place and an unsecured Wi-Fi network is the only option. Again, it is recommended the employee use their company-provided and managed laptop, not a personal mobile device. Many usage policies actually prohibit employees from connecting to unsecured Wi-Fi in the first place, which solves the problem. ... Another important step to protecting your business against BYOD risks is to create a list of accepted devices for accessing company data. Without a thorough inventory of the BYOD devices in use within an organization’s ecosystem, it’s extremely difficult to effectively measure and mitigate the risk they pose. Knowing the number of personal devices being used for business tasks allows you to require specific security measures for each type of device. 


How Can Leaders Prepare for the Unexpected?

With the impacts of an inflection point clear, how do organizations act in a timely fashion to plan and then respond? Francis said, “I tried to use the past to predict the future. It didn’t work. Given this, I gather all the critical players together routinely. At the same time, I let the pros do their job and I focus on clearing the way of obstructions.” To be able to do this, Young said it is “important to hire good people, empower them, give them the resources they need to operate at the best of their ability, and let them do their jobs. The basics of practicing disaster recovery/business continuity should be built into the organization’s DNA.” CIO Martin Davis claimed, “It is important to think through common scenarios and work out how you would handle them, and ensure you have game plans on the shelf that can be adapted for the unexpected. Ensure you learn from previous events, have practical advice ready to use and people with the right training.” To do this, Gildersleeve said the organization needs clear definitions of who is responsible for what areas in advance of the unexpected. 


Learn the Blockchain Basics - Part 9: Blockchain Around the World

From the perspective of a technician, the blockchain is: A transactional platform and distributed accounting ledger using cryptocurrency tokens as a representation of a specific value at the current time (same as fiat). That means that a transaction is carried out by the blockchain nodes, and every member of this blockchain party has a copy of this transaction on their computer (node). Everybody verifies if the entities that are about to do a transaction have enough funds to make this transaction happen. You are basically announcing to all members of this system that you are about to make something happen and, even though this action is happening between two peers, the rest of the network verifies and records the transaction. It is a computing infrastructure that uses the power of the decentralized database with linear cell-space structure, published in a semi-public way (also known as “the block”). It’s an open-source software operating on a development platform of the future. The trust service layer, in combination with Peer to Peer (P2P) network, handles microtransactions and large-value transactions as well - allowing two users to do the same things that a bank would need to do on their behalf.
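The verify-and-record step described above can be sketched as each node running the same check against its own copy of the ledger (a dict-based toy here; real nodes also verify signatures and chain history):

```python
def verify_and_apply(balances, tx):
    """Each node runs the same check: does the sender have sufficient funds?

    balances: dict mapping account -> current balance (each node holds its own copy).
    tx: a (sender, recipient, amount) tuple.
    """
    sender, recipient, amount = tx
    if amount <= 0 or balances.get(sender, 0) < amount:
        return False  # the network rejects the transaction
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount
    return True

# Every node applies the announced transaction to its own copy of the ledger,
# so all copies stay in agreement without a bank in the middle.
nodes = [{"alice": 10, "bob": 0} for _ in range(3)]
for ledger in nodes:
    verify_and_apply(ledger, ("alice", "bob", 4))
print(nodes[0])  # {'alice': 6, 'bob': 4}, identical on every node
```

An overdraft attempt fails the same check on every node, which is the point: no single party decides whether the transaction happened.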


Donald Knuth on Machine Learning and the Meaning of Life

“The word open source didn’t exist at that time,” Knuth remembers, “but I didn’t want proprietary rights over it, because I saw how proprietary rights were holding things back.” Knuth remembered how IBM had allowed other companies to make their own compilers for IBM’s Fortran programming language — whereas things were different in the typography industry. “Each manufacturer had their own language for composing pages, and that was holding everything back…” But in addition, due to the success of his programming books, “I didn’t need the income! I already had a good job, and people were buying enough books that it would bring me plenty of supplemental income for everything my kids needed for education, whatever,” he said. Referring to a familiar structure in Boolean logic, Knuth quips that income “is sort of a threshold function” — that is, it basically just needs to determine whether a certain minimum has been exceeded. “And so I could specifically see the advantage of making it open for everybody…”


6 data center trends to watch

The struggle to attract and retain staff is an ongoing problem for many data-center owners and operators. Among respondents, 47% report difficulty finding qualified candidates for open jobs, and 32% say their employees are being hired away, often by competitors. In the big picture, Uptime projects that staff requirements will grow globally from about 2 million full-time employee equivalents in 2019 to nearly 2.3 million in 2025. According to Uptime: “New staff will be needed in all job roles and across all geographic regions. In the mature data-center markets of North America and Europe, there is an additional threat of an aging workforce, with many experienced professionals set to retire around the same time—leaving more unfilled jobs, as well as a shortfall of experience. An industry-wide drive to attract more staff, with more diversity, has yet to bring widespread change.” The notion of sustainability is growing in importance in the data-center sector, but most organizations don’t closely track their environmental footprint, Uptime finds. Survey respondents were asked which IT or data-center metrics they compile and report for corporate sustainability purposes. 


Combating vulnerability fatigue with automated security validation

Legacy vulnerability management tools flood security teams with long lists of community-prioritized vulnerabilities – more than 15,000 vulnerabilities were reported in 2020 alone. Of these, only 8% were exploited by attackers. Not to mention the top 30 recently reported by CISA. Currently, it’s a cat-and-mouse game the customer can never win – chasing an ever-growing list of vulnerabilities without knowing whether they fixed the ones attackers want to target, exposed the most risk-bearing vulnerabilities, checked whether there is an active exploit for a specific vulnerability, or analyzed the possible risk and impact that may originate from a vulnerability. All that context is required for security and IT teams to reduce risk, maintain business continuity, and stay a step ahead of the adversary. Unfortunately, the chase for more and more vulnerabilities has kept us away from where we want and need to be. At this stage of the battle with cyber adversaries, CISOs can’t go backward into the world of vulnerability fatigue.
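One way to apply that context is to rank findings by known exploitation before raw severity, instead of working the list top-down by score alone. A minimal sketch, with made-up CVE identifiers and scores:

```python
def prioritize(vulns, actively_exploited):
    """Order findings so actively exploited, high-severity issues surface first.

    vulns: list of dicts with 'id' and 'cvss' keys.
    actively_exploited: set of vulnerability IDs with known active exploits.
    """
    return sorted(
        vulns,
        # Sort key: exploited issues first (False sorts before True),
        # then by descending CVSS score within each group.
        key=lambda v: (v["id"] not in actively_exploited, -v["cvss"]),
    )

findings = [
    {"id": "CVE-A", "cvss": 9.8},
    {"id": "CVE-B", "cvss": 6.1},
    {"id": "CVE-C", "cvss": 7.5},
]
exploited = {"CVE-C"}
print([v["id"] for v in prioritize(findings, exploited)])
# ['CVE-C', 'CVE-A', 'CVE-B']: the exploited medium-severity issue jumps the queue
```

The 7.5-scored issue outranks the 9.8 because an active exploit is the stronger signal of real risk, which is the shift away from chasing raw counts that the excerpt argues for.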



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - November 17, 2019

How banks and fintechs can balance security, safety and convenience


Craig Schleicher, who heads innovation for City National Bank in California, said he thinks the industry will see more automation in security. "As we move toward a lot more transactions being automated with triggers, we're going to see an evolution from card controls to much more dynamic and robust controls around individual transactions and what you permit without a human in the loop and what you don't. It's going to be a fun space to see evolve," he said. Schleicher said the concept of a financial institution's fiduciary relationship — doing everything in the best interest of the client — can be applied to how banks can help customers manage their identity. "We're seeing a lot of appetite for value-add services around identity theft protection and dark web monitoring," he said. "Some of these services started out with preventing financial fraud, but are now looking to protect their clients in other ways." Jig Patel, chief innovation officer for Fiserv's digital banking group, said it's imperative that fintechs and banks forge partnerships to combat security threats.


Angular and React have many similarities and many differences. Angular is an MVC framework and structures the application very well, but you have less flexibility. React provides only the “view” in MVC – you need to solve the M and C on your own. Because of this, you can choose whichever libraries you see fit. Both React and Angular are component based: a component receives an input and returns a rendered UI template as output. React’s use of a virtual DOM is part of what makes it so fast. The virtual DOM looks only at the differences between the previous and current HTML and changes just the part that needs to be updated. Angular uses the regular DOM, which updates the entire tree structure of HTML tags until it reaches the data that changed – the user’s age, say. React decided to combine UI templates and inline JavaScript/TypeScript logic, which no company had done before. The result is called “JSX” (JavaScript) or “TSX” (TypeScript). JSX/TSX is a big advantage for development, because you have everything in one place, and code completion and compile-time checks work better.
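The virtual-DOM idea, diffing two trees and patching only what changed, can be sketched language-agnostically with nested dicts standing in for DOM nodes (Python purely for illustration; React's actual reconciler is far more sophisticated):

```python
def diff(old, new, path="root"):
    """Return the minimal set of patches needed to turn tree `old` into tree `new`."""
    patches = []
    for key in new:
        if key not in old:
            patches.append(("add", f"{path}.{key}", new[key]))
        elif isinstance(new[key], dict) and isinstance(old[key], dict):
            # Recurse into matching subtrees instead of replacing them wholesale.
            patches.extend(diff(old[key], new[key], f"{path}.{key}"))
        elif old[key] != new[key]:
            patches.append(("update", f"{path}.{key}", new[key]))
    for key in old:
        if key not in new:
            patches.append(("remove", f"{path}.{key}", None))
    return patches

old = {"profile": {"name": "Ada", "age": 36}}
new = {"profile": {"name": "Ada", "age": 37}}
print(diff(old, new))  # [('update', 'root.profile.age', 37)]
```

Only the changed age produces a patch; the rest of the tree is untouched, which is exactly the work a regular DOM update avoids skipping.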


Enterprise Architecture Alignment for the Intelligent Enterprise


In many ways, disruptive technologies are like a travel adventure – a journey beyond “business as usual” to “business unusual and unexplored.” These technologies offer opportunities to go back to basics, reimagine processes in the context of today’s realities, and recreate satisfying customer and employee experiences. Silently and gradually, disruptive technologies – such as the Internet of Things (IoT), cloud platforms, analytics, robotic process automation (RPA), artificial intelligence (AI), and machine learning – have made it to the list of must-have technologies for most progressive and innovative organizations. With the cost of devices and storage falling, the variety of available protocols and technologies is deep, and the pool of experts is growing. However, the journey from initial experimentation to full deployment of disruptive solutions requires the ability to deal with the uncertainties of a complex enterprise application landscape.


How to Use Redis TimeSeries with Grafana for Real-time Analytics


Time-series data is broadly defined as a series of data stored in time order. Examples of time-series data can range from stock prices over a period of many years to CPU performance metrics from the past few hours. Time-series data is widely used across many industry verticals. It has carved out its own category of databases, because relational, document-oriented and streaming databases do not fulfill the needs of this particular type of data. ... A typical time-series database is usually built to only manage time-series data so one of the challenges it faces is with use cases that involve some sort of computation on top of time-series data. An example would be capturing a live video feed in a time-series database. If you were to apply some sort of an AI model for face recognition, you would have to extract the time-series data, apply some sort of data transformation and then do computation. This is not ideal for a real-time use case. Multi-model databases that also manage other data models solve for these use cases where multiple data models can be manipulated in place.
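What a time-series database does at its core, keeping samples in time order and answering range queries, can be sketched in a few lines. This is an in-memory toy for illustration, not how Redis TimeSeries or any particular product is implemented:

```python
import bisect

class TimeSeries:
    """Tiny series kept in time order, with range queries and averaging."""

    def __init__(self):
        self.times, self.values = [], []

    def add(self, ts, value):
        # Insert in timestamp order; when samples arrive in order,
        # this degenerates to an O(1) append at the end.
        i = bisect.bisect(self.times, ts)
        self.times.insert(i, ts)
        self.values.insert(i, value)

    def range_avg(self, start, end):
        """Average of samples with start <= ts <= end, or None if the window is empty."""
        lo = bisect.bisect_left(self.times, start)
        hi = bisect.bisect_right(self.times, end)
        window = self.values[lo:hi]
        return sum(window) / len(window) if window else None

# CPU utilisation samples, one per minute (timestamps in seconds).
cpu = TimeSeries()
for ts, pct in [(0, 10), (60, 30), (120, 50), (180, 90)]:
    cpu.add(ts, pct)
print(cpu.range_avg(0, 120))  # 30.0
```

The time-ordered layout is what makes range queries a cheap slice rather than a full scan, and it is also what relational and document stores are not optimised for.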



According to Forbes, the trust factor in the capabilities of blockchain is expected to rise with time. The real impact of a distributed ledger is still under speculation, but given the spurt of applications already crowding the markets, it is only a matter of time before blockchain penetrates every industry sector. This universality of blockchain can be compared to “all things digital,” which Gartner predicted in 2017; within two years that prediction turned into a formidable reality. Something that could shorten the growth period for blockchain is the existing transactional-integrity features of cryptocurrency. In the near future, critical data will reside on distributed data stores — combining on-premise, cloud, and remote facilities — and blockchain will emerge as a savior for transactional integrity. According to J. Christopher Giancarlo, Chairman of the U.S. Commodity Futures Trading Commission, free markets foster “creativity and economic expression to promote human growth and advancement.” This assertion comes from the belief that “sustained prosperity” is a natural byproduct of “open and competitive markets, free of political interference, combined with free enterprise, personal choice, voluntary exchange and legal protection of person and property.”


The Enterprise Architecture Versus Business Architecture Vs Business Design ...

The mission of the Architecture of the Enterprise remains crucial, though. It has to integrate all the disparate views and diagrams in the enterprise into one enterprise blueprint. Hence, the IT enterprise and business architecture approaches need to be properly merged/linked so that they can deliver the entire blueprint of the enterprise. In addition, the enterprise-level architects should also consider integrating all enterprise-level activities that deliver process modelling, quality processes and products, non-IT schematics and engineering disciplines that ensure the trimming of the operation by measuring and adjusting the processes, and provide security, availability and scaling of the enterprise.  To sum up, the top Architect of the Enterprise should operate higher up in the enterprise hierarchy to cover the business architecture and integrate it with the technology and people architecture. This architect should ensure that the full blueprint of the enterprise is delivered, rather than the IT blueprint or solutions alone, and should make sure that the audience is the whole enterprise rather than IT.


Security in the supply chain – a post-GDPR approach


The crux of the issue is this: although the GDPR sets out requirements relating to security – appropriate technical and organisational measures – it is not very prescriptive. The text is inherently legalistic and businesses are often left wondering how to apply the requirements. So, while a processor may be required to comply with the legal requirements, the processor’s view of what technical and organisational security measures are appropriate may differ from the controller’s own views. Likewise, where processors perform commoditised processing activities, they may not have sufficient knowledge of the personal data and how the controller uses it to assess the risks adequately. ... Clearly, if a processor is responsible for a security failure in breach of the GDPR, then the processor will have direct responsibility under the regulation. 


Generate TypeScript Client API for ASP.NET Web API

If you have ever developed SOAP-based Web services using WCF, you might have enjoyed using the client API code generated by SvcUtil.exe or the Web Service References feature of the Visual Studio IDE. When moving to Web API, I felt I had gone back to the Stone Age, since I had to do a lot of data type checking at design time using my precious brain power while computers should have done the job. I had developed some RESTful Web services on top of IHttpHandler/IHttpModule in 2010 for Web services that did not handle strongly typed data but arbitrary data like documents and streams. However, I have been getting more Web projects with complex business logic and data types, and I would rather utilize high abstraction and semantic data types throughout the SDLC. I see that ASP.NET Web API does support high abstraction and strongly typed function prototypes through the ApiController class, and the ASP.NET MVC framework optionally provides a nicely generated Help Page describing the API functions.


Can Data Security And Employee Privacy Coexist In A BYOD Enterprise?

The enterprise security perimeter has all but dissolved, and business apps and data are increasingly dispersed across devices and networks that companies don’t own or control. Cybercriminals have jumped on this widespread disruption to take advantage of security gaps and launch all kinds of attacks, such as phishing, man-in-the-middle, device takeovers and more. In the past, security professionals were able to lock everything down behind a firewall, but now we can’t put the genie back in the bottle. Enterprise mobility is here to stay, and it’s up to every CIO to figure out how to make enterprise data and user privacy securely coexist on employee-owned devices. We need to address these challenges head-on because enterprise mobility and BYOD trends will only continue to expand rapidly around the globe. Worldwide, the BYOD and enterprise mobility market is projected to grow by $84 billion, driven by a compound annual growth rate of 16.3%. So while it’s clear that mobile enterprise users aren’t going back to their old PC workstations any time soon, enterprise security strategies must catch up to the rapid evolution of modern mobility ASAP.


Proof Of Work Doesn't Solve Every Blockchain Use Case

What is Bitcoin? A question with many answers. Digital gold, magic internet money, a hedge against macro risk, tulip mania? One thing is for certain, Bitcoin found a product–market fit as a new form of money owned by the people. The Bitcoin brand is well known around the globe, the userbase is growing fast, and it continues to attract developers to the ecosystem. However, Bitcoin is not a panacea. When Satoshi first launched Bitcoin, he made design choices that were optimal for becoming a hard money with a limited attack surface at the cost of base-layer scalability and an expressive scripting language. One of those major choices was to implement a distributed proof-of-work (PoW) system to form network consensus. In other words, Bitcoin is great at being money but not very good at all the other potential use cases for a blockchain. The lesson here is that design choices come with tradeoffs, and Bitcoin has already cemented its path. This leaves room open for alternative blockchain architectures to capture value in a different market — such as supply chain management, enterprise software, social media, voting, prediction markets and more.
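The proof-of-work design choice can be illustrated with a toy miner: finding a qualifying nonce takes many hashes, while any other node can verify the result with exactly one. The difficulty and block contents below are arbitrary, and real Bitcoin mining works over a binary target, not hex prefixes:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block #1: alice pays bob 4", difficulty=4)

# Finding the nonce took on the order of tens of thousands of hashes;
# checking the claimed solution takes a single hash.
check = hashlib.sha256(f"block #1: alice pays bob 4{nonce}".encode()).hexdigest()
assert check == digest
print(digest[:4])  # 0000
```

That asymmetry (expensive to produce, trivial to verify) is what secures consensus, and it is also the base-layer cost that makes PoW a poor fit for high-throughput use cases like supply chain events or social media posts.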



Quote for the day:


"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein


Daily Tech Digest - August 18, 2018

Cloud ERP isn’t a handshake deal – it’s a value extraction challenge

“Modern ERP” has the potential to change that, but I believe customers, partners, and vendors must go all-out to get there – long after go-live is over, and the handshakes and smiles are in the rearview. Virtually all the cloud ERP use cases I’ve conducted are small to midmarket. That doesn’t mean these benefits can’t extend to large enterprise, but we should acknowledge that the problem of cleansing and integrating data to achieve a single source of the truth is much tougher as companies get bigger. As for the obvious/looming question: “Can you do this with on-premise ERP?” Yes, I believe you can. “Cloud” in this case is really just a placeholder for something wonky, like: “standardized and easily upgradeable ERP systems on a platform you can easily extend and integrate via open standards.” In other words, “modern” is more about data accessibility and, yes, scalability than where the software lives. However, custom coding the heck out of on-premise systems looms as a catastrophic temptation.


Office 365 outage: Sign-in issues blight users across Europe and the US


Pete Banham, cyber resilience expert at Microsoft-focused email management company Mimecast, said enterprises need to ensure resilience is built into their Office 365 deployments to ensure their businesses can keep operating during downtime incidents. “IT teams and frustrated users struggled to remain productive [on Thursday] as they were unable to log in and use Office 365 services,” he said. “Due to operational dependency on the Microsoft environment, businesses are putting themselves at risk of being affected by commonplace outages such as this. “Anyone outsourcing a critical communication service like cloud or hosted email must consider a cyber-resilience strategy that assures the ability to recover and continue with business as usual.” Apart from fielding queries about yesterday’s outage, the Microsoft Office 365 Twitter account was also having to respond to questions about a recurring subscription activation issue that Mac users of the software have been encountering for a few weeks.


Women in Tech 2018: What the Statistics Tell Us


PwC recently looked at the role of women in tech in the UK. In STEM fields, women accounted for only 15 percent of employees. More distressingly, there are few signs that this number will rise without extra action, as only 15.8 percent of undergraduates in STEM fields are women. Leadership examples can be key toward encouraging more participation among women, yet only five percent of leadership positions in STEM fields are held by women. In PwC’s report “The Female Millennial — The New Era of Talent,” researchers found that young women want to work with employers with a strong history of inclusion, diversity, and equality. Many women see the low number of women in tech and choose to enter other fields. The PwC report highlights the problems these disparities create for UK companies. Two-thirds of CEOs in the UK claim to have difficulty hiring people with digital skills, a number that significantly exceeds the 43 percent of CEOs who claimed the same in the US and the 24 percent of CEOs in China.


How the Boston Children’s Hospital Is Innovating on Top of an Open Cloud


“The open cloud model is really advantageous for companies to innovate at the infrastructure level, but even to manage it,” says Krieger, who has seen services like those of Boston Children’s Hospital’s radiology programs built on top of it. “There’s a huge cost in running experiments. When you’re a not-for-profit university, it’s even greater, with a huge capital investment to start anything on a cloud at sufficient scale.” Krieger contends that it’s always cheaper for universities — at least in the United States — to purchase their own equipment than to rent servers on the private cloud. But he said that “In the end the success will only come from something like the open cloud, if industry participates in it.” Krieger says only an aggregate of all these universities could inject the necessary capital investment to start out a cloud at sufficient scale to allow for experimentation on top of it. Plus, he says that universities have a long history of standing up large-scale computational infrastructure long before today’s public clouds.


Should Staff Ever Use Personal Devices to Access Patient Data?

HIPAA violation or not, is it ever a good idea to allow healthcare employees to use their personal smartphones to access patient records? What about during a crisis situation? "These are really tricky issues," says privacy attorney Kirk Nahra of the law firm Wiley Rein. "You have to think about two paths on these questions - how is this situation [involving employee smartphone access to patient records] handled normally, and what - if anything - can be done differently in an emergency situation? Both questions essentially involve thorough thinking as part of an overall risk assessment process." Companies of all kinds - in healthcare and otherwise - have to figure out how to manage the fact that data can be transmitted to mobile devices, whether personal or employer-issued, Nahra says. "What a company allows and what it does not allow - and how it 'prevents' what it doesn't allow - is a critical component of any risk assessment today." Companies have to develop a strategy that balances appropriate risks as well as business needs, the attorney adds.


Software Quality Is a Competitive Differentiator

The digital world is creating intriguing challenges related to software quality. These extend beyond the sheer volume of code that’s required to run systems. For instance, UI/UX has emerged front and center — particularly as apps have proliferated. Maturing technologies, such as augmented reality and virtual reality, have introduced new challenges. The takeaway? It's no longer acceptable to view UI/UX testing as a traditional, commoditized function — a quality experience is paramount. There are other challenges, too. As the IoT matures and grows, there's a need for innovation in testing. The variety and number of edge devices are exploding, and all of this introduces enormous QA challenges. Ensuring that software performs adequately and meets user requirements is critical. The need for service level agreements between service providers and consumers has never been more important. Artificial intelligence changes the testing landscape as well. It can take over some human roles.
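The point about service level agreements and adequate performance can be made concrete with an automated check. Below is a minimal sketch in Python of a latency test that fails when a percentile target is missed; `render_screen` and the `SLA_P95_MS` target are hypothetical stand-ins for a real operation and a real contractual threshold.

```python
import time

def measure_latency(operation, runs=20):
    """Time repeated calls to `operation`, returning per-run latencies in ms."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def render_screen():
    # Hypothetical stand-in for a real UI action or API call under test.
    sum(range(10_000))

SLA_P95_MS = 200  # illustrative service-level target, not from the article

latencies = measure_latency(render_screen)
p95 = sorted(latencies)[int(0.95 * len(latencies)) - 1]
assert p95 < SLA_P95_MS, f"p95 latency {p95:.1f} ms exceeds SLA"
print(f"p95 latency: {p95:.2f} ms (SLA {SLA_P95_MS} ms)")
```

Checks like this can run in a CI pipeline so that a performance regression is caught as a test failure rather than as a customer complaint.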


The Consumerization of Enterprise Architecture: Everyone’s an Architect!

In the longer term, we are seeing a more profound shift in the role of architecture: positioning enterprise architecture not as a high-level, top-down operating discipline, but as the connective fabric between different types of change. Within the enterprise, this means that everyone will have a role in the development and evolution of the architecture in some way. Essentially, everyone becomes an architect!  Of course that doesn’t mean that every employee should go on a TOGAF course. Rather, you should provide everyone with the instruments that let them see what the options and effects of local or global changes might be and act accordingly. All these changes can then align with the shared enterprise vision and work together in concert, ranging from the result of a local process improvement or the priority of some agile user story to the impact of a merger or the effect of new regulatory requirements on your business model.


What Harry Potter Teaches us about Constant Vigilance and Insider Threats


“Constant vigilance” is sage advice for businesses too. With the threat of malicious insiders, undetected attackers moving around a network and other risks to mitigate, there is no “one-and-done” solution in security. Industry research such as the 2018 Verizon Data Breach Investigations Report (DBIR) helps the collective community keep an eye on trends and glean insights from lessons learned to get ahead of potential vulnerabilities before they become problems. A few key trends identified in the report caught my eye. While the report indicates that 78 percent of people didn’t click on a single phishing link all year (which is promising news), phishing and pretexting remain popular attack methods. Attackers only need one employee to click a link and open the door for them to enter. Once an attacker has stolen credentials, they can manoeuvre within the network, escalating levels of privilege until they have the access they need to wreak the havoc they intend. The report’s emphasis on education — making sure that employees are trained to identify and report social attacks such as phishing — is one important line of defence. Knowing what to look for is half the battle.


Why Your Approach to Cybersecurity Needs to Be Proactive Rather Than Reactive

One of the major challenges facing businesses is the increasing sophistication of hackers. Ever-evolving hacking tactics and techniques, as well as more readily available hacking tools, have made it possible for cyber criminals to circumvent traditional defenses such as firewalls and anti-virus software. This leads to a further problem: attacks are becoming harder to detect. In fact, it is common for businesses to be breached without even knowing it. According to the Ponemon Institute, it takes an average of 191 days for a business to detect that it has been hacked. To counteract these problems, it is important to gain visibility of what activity is happening across networks and endpoints in order to detect malicious activity in its infancy, before it spreads. You need to assume that your business will be breached at some point and have appropriate monitoring controls and procedures in place to mitigate the risks.
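The kind of visibility described above typically starts with correlating events across log sources. As a minimal sketch, assuming parsed authentication events rather than any specific SIEM product, the snippet below flags source addresses whose failed-login count crosses a threshold; the sample events and the cutoff value are illustrative.

```python
from collections import Counter

# Hypothetical parsed log entries: (source_ip, event) pairs.
# In practice these would be fed from endpoint agents or a log aggregator.
events = [
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.9", "login_ok"),
]

FAILED_LOGIN_THRESHOLD = 3  # illustrative cutoff; tune per environment

def flag_suspicious_sources(events, threshold=FAILED_LOGIN_THRESHOLD):
    """Return source IPs whose failed-login count meets the threshold."""
    failures = Counter(ip for ip, event in events if event == "login_failed")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

print(flag_suspicious_sources(events))  # -> ['10.0.0.5']
```

A real deployment would look at many more signals (privilege changes, lateral movement, unusual data transfers), but the principle is the same: detect the anomaly early, before the attacker has time to spread.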


Security architecture for the mobile ad hoc networks

Security in mobile ad hoc networks (MANETs) has been an actively researched topic for several years. As opposed to wired networks, MANETs have dynamic topology, limited resources and limited bandwidth, and are usually deployed outdoors in emergency scenarios, where the landscape plays an important role. MANETs are susceptible to insider and outsider attacks and bring new security challenges that were not present in wired networks. The most important difference is that every node in a MANET acts as a router and routes traffic throughout the network. Compromising one node can hugely affect network performance. In this paper, we present our security architecture for MANETs, which secures important aspects of the network. We bring a trust model into the network, and nodes are secured by different mechanisms tailored specifically for use in a distributed environment.



Quote for the day:


"The key to being a good manager is keeping the people who hate me away from those who are still undecided." -- Casey Stengel