Showing posts with label kanban. Show all posts

Daily Tech Digest - July 01, 2023

CERT-In cyber security norms bar use of Anydesk, Teamviewer by govt dept

Cyber security watchdog CERT-In has barred the use of remote desktop software such as Anydesk and Teamviewer in government departments under new security guidelines released on Friday. The guidelines prescribe that government departments use virtual private networks (VPNs) for accessing network resources from remote locations and enable multi-factor authentication (MFA) for VPN accounts. "Ensure to block access to any remote desktop applications, such as Anydesk, Teamviewer, Ammyy admin etc," the Guidelines on Information Security Practices for Government Entities said. CERT-In (Indian Computer Emergency Response Team) said the purpose of these guidelines is to establish a prioritised baseline for cyber security measures and controls within government organisations and their associated organisations. Minister of State for Electronics and IT Rajeev Chandrasekhar said in an official statement that the government has taken several initiatives to ensure an open, safe, trusted and accountable digital space.


Navigating Product Owner Accountability in Scrum: Debunking Myths and Overcoming Anti-Patterns

In a misguided attempt to ‘help’ Product Owners with their important responsibilities, some organizations establish two Product Owners for a single Product. However, while this may seem helpful at first, it actually causes significant problems for both Product Owners involved. When multiple Product Owners exist, conflicting ideas and visions may arise, diluting the product's direction and impeding progress. ... Instead, the Product Owner can delegate tasks such as creating Product Backlog items, maintaining the roadmap, or gathering metrics to Developers on the Scrum team. However, it is important to note that while the Product Owner may delegate as needed, the Product Owner ultimately remains accountable for items in the Product Backlog as well as the product forecast or roadmap, thus ensuring that there is a single, unifying vision and goal for the product and that the Product Backlog is in alignment with that vision. If the Product Owner is delegating the creation of Product Backlog items to Developers, what does that mean?


Cisco firewall upgrade boosts visibility into encrypted traffic

“What our competitors are saying is ‘just decrypt everything.’ But we know in the real world, customers refrain from doing that due to data privacy concerns and to meet legal/compliance requirements. Furthermore, decrypting and re-encrypting data requires technical prowess not everyone has, increases the attack surface, and also causes severe performance challenges,” Miles said. EVE works by extracting two primary types of data features from the initial packet of a network connection, according to a blog written by Blake Anderson, a software engineer in Cisco’s advanced security research group. First, information about the client is represented by the Network Protocol Fingerprint (NPF), which extracts sequences of bytes from the initial packet and is indicative of the process, library, and/or operating system that initiated the connection. Second, it extracts information about the server such as its IP address, port, and domain name (for example a TLS server_name or HTTP Host). 
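The two feature types Anderson describes can be caricatured in a few lines of Python. This is a toy sketch, not Cisco's actual NPF encoding: it simply hashes a byte sequence drawn from the connection's initial packet as a stand-in client fingerprint, and records server metadata beside it. The hex strings and server values are hypothetical.

```python
import hashlib

def client_fingerprint(first_packet: bytes, n: int = 32) -> str:
    """Toy stand-in for a Network Protocol Fingerprint: digest a byte
    sequence from the connection's initial packet. Different client
    libraries and OSes emit different initial bytes, so the digest hints
    at what started the connection without decrypting anything."""
    return hashlib.sha256(first_packet[:n]).hexdigest()[:16]

def server_features(ip: str, port: int, server_name: str) -> dict:
    """The second feature set: server-side metadata such as the TLS
    server_name or HTTP Host."""
    return {"ip": ip, "port": port, "server_name": server_name}

# Two hypothetical TLS ClientHello prefixes from different client stacks.
hello_a = bytes.fromhex("16030100c4010000c00303")
hello_b = bytes.fromhex("16030300a20100009e0303")

print(client_fingerprint(hello_a) != client_fingerprint(hello_b))  # True
print(server_features("203.0.113.7", 443, "example.com"))
```

The point of the sketch is that both feature sets come from observable plaintext (initial bytes and connection metadata), which is why this style of analysis avoids the decryption trade-offs Miles describes.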


Scrum vs. Kanban vs. Lean: Choosing Your Path in Agile Development

While Scrum is commonly associated with software development teams, its principles and lessons have broad applicability across various domains of teamwork. This versatility is one of the key factors contributing to the widespread popularity of Scrum. Scrum is founded upon the concept of time-boxed iterations called sprints, which are designed to enhance team efficiency within cyclical development cycles. ... Kanban is well-suited for organizations seeking to embrace the benefits of agility while minimizing drastic workflow changes. It is particularly suitable for projects where priorities frequently shift, and ad hoc tasks can arise anytime. Kanban is a flexible methodology that can be applied to various domains and teams beyond software development. ... Lean methodology strongly emphasizes market validation and creating successful products that provide value to users. It is particularly well-suited for new product development teams or startups operating in emerging niches where a finished product may not yet exist, and resources are limited.


3 Ways to Build a More Skilled Cybersecurity Workforce

In addition to insights around highly sought-after skill sets and job titles, OECD's report also reveals that demand for cybersecurity professionals has spread beyond the confines of major urban centers. It calls for a more decentralized workforce to meet demand in underserved areas. ... If companies are to close the skills gap and meet the current demand for cybersecurity workers, they will need to broaden their horizons to account for more nontraditional cybersecurity career paths. In doing so, they will enhance the industry with a broader range of unique experiences and life skills. Recruiting more diverse candidates also allows companies to approach security challenges from different angles and identify solutions that may not have been considered otherwise. When a workforce is as diverse as the cybersecurity threats an organization faces, it can pull from a broader range of professional and personal experiences to more effectively and inclusively protect themselves and their end users.


AI's Teachable Moment: How ChatGPT Is Transforming the Classroom

"Teachers could say, 'Hey my students are really interested in TikTok,' then feed that to the AI," says Liu. "An AI could come up with three analogies related to TikTok that connect students to their needs and interests." Liu believes we absolutely need to acknowledge the immediate threats surrounding AI and its initial impact on teachers, particularly around skills assessments and cheating. One approach he takes is to speak openly with students and acknowledge that AI is the new thing and that we're all learning about it – what it can do, where it might lead. The more open conversations educators have with students, he says, the better. In the near term, students are going to cheat. That's impossible to avoid. YouTube and TikTok are bulging at the seams with tricks to help students avoid plagiarism trackers. In the medium term, Liu believes, we need to reevaluate what it means to grade students. Does that mean allowing students to use AI in assessments? Or changing how to teach topics? Liu isn't 100% sure.


Top 5 Benefits Of Blockchain Technology

Transparency within the Blockchain ecosystem refers to the open visibility of transactions, enabling all participants to validate and verify the recorded data. Unlike traditional systems that rely on centralized authorities, Blockchain operates on a decentralized network, where each transaction is recorded on a public ledger known as the Blockchain. ... Immutability is a cornerstone of Blockchain technology. It guarantees that once a transaction is recorded on the Blockchain, it becomes virtually impossible to alter or tamper with the data. This is achieved through a combination of cryptographic techniques and consensus mechanisms. Blockchain achieves data immutability by using cryptographic hashing. Each transaction is assigned a unique cryptographic hash, which is essentially a digital fingerprint. This hash is created by applying complex mathematical algorithms to the transaction data, resulting in a fixed-length string of characters. Furthermore, Blockchain relies on consensus mechanisms such as Proof of Work (PoW) or Proof of Stake (PoS) to validate and verify transactions.
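The hash-chaining mechanism behind immutability can be sketched in a few lines. This is a simplified illustration with invented field names: consensus, Merkle trees, and signatures are all omitted, so it is not a production blockchain, only the fingerprint-and-link idea.

```python
import hashlib
import json

def tx_hash(tx: dict) -> str:
    """The transaction's 'digital fingerprint': a fixed-length SHA-256 digest."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Link a block to its predecessor by embedding the predecessor's hash."""
    block = {"prev_hash": prev_hash, "tx_hashes": [tx_hash(t) for t in transactions]}
    block["block_hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Check every link: each block must reference its predecessor's hash."""
    return all(curr["prev_hash"] == prev["block_hash"]
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block([{"from": "alice", "to": "bob", "amount": 5}], "0" * 64)
second = make_block([{"from": "bob", "to": "carol", "amount": 2}],
                    genesis["block_hash"])
chain = [genesis, second]
print(chain_is_valid(chain))   # True

# If a block's hash changes (as it would after any edit to its contents),
# every later link in the chain breaks.
genesis["block_hash"] = "f" * 64
print(chain_is_valid(chain))   # False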


Technical Debt tracking supports projects to “do it right”

For decades, there have been logs of outstanding bugs found in testing but not corrected before the project is implemented. The term technical debt adds the concept that there are consequences to those decisions, and that there are strong reasons to prioritize the follow-up to fix things and clear that list. Most of us are aware of workarounds that were left in place permanently and eventually cost too much. We may have seen a system with poor performance that slowed the work of key workers and/or was missing functionality that impacted the customer experience. All of these are important reasons that technical debt should be cleared up. There are other reasons too. Generally, people do not purposefully create poor designs or code with bugs. ... One of the interesting concepts that has been offered by Martin Fowler is the Technical Debt Quadrant that talks about the prudent but inadvertent technical debt that is created as we learn during a project and realize how the project should have been done.


Successful digital transformation requires simplistic thinking

While organizations are aggressively pursuing transformation goals, Chaudhry warned that antiquated mindsets and a range of internal factors can seriously inhibit innovation and prevent businesses from achieving their goals. Most notable among these is a complacent culture among some IT leaders who are stuck in a loop of traditional, outdated practices. “IT plays the most important role in driving transformation. You play the most important role, but you also need to act fast to drive change,” he said. “You can’t sit back and say ‘this is how things have been done for the last 30 years, so let’s keep doing so’.” ... Inertia, as he puts it, is a powerful inhibitor that locks IT leaders and organizations into an outdated mindset which prevents them from embracing change. “Inertia is powerful, and it holds you back because we are comfortable with what we’ve been doing for the last 10, 20 years or so,” he said. Research has often identified inertia as a common inhibitor in digital transformation, whereby teams are reluctant - or unwilling - to accept change.


Strategies to drive the Data Mesh cultural transformation

It’s important to have consistent and clear communication to ensure that everyone understands the reasons and the effects of change. Leaders must communicate the vision and benefits of Data Mesh. They also need to guide on how the new ways of working are going to be adopted through well-defined structures, roles and responsibilities for the new data product teams. To ensure data product ownership and accountability, defining clear KPIs and metrics for each data product team to measure success and track progress is critical. ... Rather than trying to adopt Data Mesh all at once, organizations can start with small pilot projects and gradually expand. This approach can help understand how processes defined in vitro work in real life. It also comes with lessons learned which help followers avoid the initial mistakes. ... This ensures that everyone in the organization understands the new concepts and ways of working. It could include training sessions and coaching on Data Mesh, product thinking, design, user research, agile methodologies, cross-functional team collaboration, and data product ownership.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - July 26, 2022

Don’t get too emotional about emotion-reading AI

Unfortunately, the “science” of emotion detection is still something of a pseudoscience. The practical trouble with emotion detection AI, sometimes called affective computing, is simple: people aren’t so easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it made ironically or in jest? Relying on AI to detect the emotional state of others can easily result in a false understanding. When applied to consequential tasks, like hiring or law enforcement, the AI can do more harm than good. It’s also true that people routinely mask their emotional state, especially in business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile and nod and empathetically frown because it’s appropriate in social interactions, not because they are revealing their true feelings. Conversely, people might dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, the knowledge that emotion AI is being applied creates a perverse incentive to game the technology.


How AI and decision intelligence are changing the way we work

Technology can also provide a simple yet powerful AI tool for employees to use during their day-to-day activities. They can capture lessons learned as they work in real time, and adjust their actions when a corrective action is needed, also in real time. Throughout this process, AI defines actionable takeaways, shares insights and offers concise lessons learned (suggesting corrective actions, for example), all of which can boost the entire team’s performance. Since AI turns the data collected from daily work into actionable lessons learned, every team member can contribute to and draw on their team’s collective knowledge — and the entire company’s collective knowledge as well. The technology prompts them to capture their work, and it “knows” when a team member should see information relevant to their current task. AI ensures everyone has the right data at the right time, exactly when they need it. In this vision of a data-driven environment, access to data liberates and empowers employees to pursue new ideas, Harvard Business Review writes.


The emergence of multi-cloud networking software

Contrary to general perception, Hielscher argues that many enterprises do not voluntarily choose to operate within a multi-cloud environment. In many cases, the environment is thrust upon them through a merger, acquisition, or an isolated departmental choice that preceded a decision to consolidate architectures. "This results in organizational gaps, skill-set gaps, and contractual and spending overlaps," he explains. "As with any IT strategy, the first step is to establish which goals are to be addressed and the timeframes to address them in." Potential adopters should be prepared to spend both time and money when evaluating and comparing MCNS products. "For example, organizations should plan costs associated with staffing a team of engineers to see them through the evaluation process," Howell says. While virtually all large cloud-focused enterprises, and many smaller organizations, can benefit from the right MCNS, it's important to keep an eye on service and the bottom line. "Benefits to the enterprise must be greater than the cost of the solution," Howell warns.


Software Methodologies — Waterfall vs Agile vs DevOps

Software development projects that are clearly defined, predictable, and unlikely to undergo considerable change are best handled using the waterfall method. Typically, smaller, simpler undertakings fall under this category. Waterfall projects don't incorporate feedback during the development cycle, are rigid in their process definition, and offer little to no output variability. Agile methods are built on incremental, iterative development that promptly produces a marketable business product. The product is broken down into smaller pieces throughout incremental development, and each piece is built, tested, and modified. Agile initiatives don't begin with thorough definitions in place. They rely on ongoing feedback to guide their progress. DevOps is all about merging teams and automation, and is adaptable to both traditional and Agile cultures. In contrast to a typical dev-QA-ops organization, developers do not throw code over the wall in DevOps. In a DevOps setup, the team is in charge of overseeing the entire procedure.


Why you need to protect abandoned digital assets

The dangers posed by these abandoned assets are multifarious. Local digital assets can be usurped and used for malicious purposes, such as identity theft and credit card fraud. Not only does this leave organisations open to significant fines for breaches of data protection laws, but there is also the associated reputational harm caused by these incidents. “The risk depends what the connection is pointing to and what authentication or security measures have been put in place,” says Nahmias. “Security teams tend to be more lenient about connections to internal resources than they are about connections to external ones.” The distributed nature of modern enterprise means that networks are no longer spider webs, but a complex mesh. While this is a far more robust form of network connectivity, there are also far more connections that need to be managed. As such, there is a potential risk of network connections from abandoned assets still being active, essentially permitting access to the rest of the corporate network. In many ways, this is a far greater risk to the organisation, as malicious actors could potentially obtain confidential information through these unsecured connections.


How the cybersecurity skills gap threatens your business

The deficit in skilled cybersecurity personnel is now directly affecting businesses’ ability to remain secure. The World Economic Forum has stated that 60 per cent of organisations would “find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team” and industry body ISACA found that 69 per cent of those businesses that have suffered a cyber attack in the past year were somewhat or significantly understaffed. The impacts can be devastating. Accreditation body ISC(2)’s Cybersecurity Workforce Study found that staff shortages were leading to misconfigured systems, tardy patching of systems, lack of oversight, insufficient risk assessment, lack of threat awareness and rushed deployments. With these shortages now jeopardising businesses’ ability to function, the hiring function is under significant pressure to up its game. To make matters worse, these shortages are expected to intensify. Last year the Department for Culture, Media and Sport (DCMS) predicted there would be an annual shortfall of 10,000 new entrants into the cybersecurity market but in its latest report, released in May, that was revised to 14,000 every year.


Kanban vs Scrum: Differences

Kanban is a project management method that helps you visualize the project status. Using it, you can readily visualize which tasks have been completed, which are currently in progress, and which tasks are still to be started. The primary aim of this method is to find out the potential roadblocks and resolve them ASAP while continuing to work on the project at an optimum speed. Besides ensuring timely, quality delivery, Kanban ensures all team members can see the project and task status at any time. Thus, they can have a clear idea about the risks and complexity of the project and manage their time accordingly. However, the Kanban board involves minimal communication. ... Scrum is a popular agile method ideal for teams who need to deliver the product in the quickest possible time. This involves repeated testing and review of the product. It focuses on the continuous progress of the product by prioritizing teamwork. With the help of Scrum, product development teams can become more agile and decisive while becoming responsive to surprising and sudden changes. Being a highly transparent process, it enables teams and organizations to evaluate projects better as it involves more practicality and fewer predictions.
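The core Kanban mechanics described above, visible columns plus work-in-progress limits that surface roadblocks early, fit in a short sketch. The column names and the WIP limit of 2 below are illustrative choices, not part of any standard:

```python
class KanbanBoard:
    """Minimal Kanban board: every task's column is visible to the whole team."""

    def __init__(self, wip_limits: dict):
        # A limit of None means the column is unbounded.
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, task: str, column: str = "todo"):
        self._check_limit(column)
        self.columns[column].append(task)

    def move(self, task: str, src: str, dst: str):
        self._check_limit(dst)
        self.columns[src].remove(task)
        self.columns[dst].append(task)

    def _check_limit(self, column: str):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            # A full WIP-limited column is the roadblock signal
            # Kanban is designed to surface.
            raise RuntimeError(f"WIP limit reached in '{column}'")

board = KanbanBoard({"todo": None, "doing": 2, "done": None})
for task in ["write spec", "fix login bug", "update docs"]:
    board.add(task)
board.move("write spec", "todo", "doing")
board.move("fix login bug", "todo", "doing")

try:
    board.move("update docs", "todo", "doing")   # third concurrent task
except RuntimeError as err:
    print(err)   # WIP limit reached in 'doing'
```

Hitting the limit forces the team to finish or unblock in-progress work before starting more, which is how Kanban keeps flow visible and bottlenecks explicit.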


8 top SBOM tools to consider

Indeed, SBOMs are no longer just a good idea; they're a federal mandate. According to President Joe Biden's July 12, 2021, Executive Order on Improving the Nation’s Cybersecurity, they're a requirement. The order defines an SBOM as "a formal record containing the details and supply chain relationships of various components used in building software." It's an especially important issue with open-source software, since "software developers and vendors often create products by assembling existing open-source and commercial software components." Is that true? Oh yes. We all know that open-source software is used everywhere for everything. But did you know that managed open-source company Tidelift finds that 92% of applications contain open-source components? In fact, the average modern program comprises 70% open-source software. Clearly, something needs doing. The answer, according to the Linux Foundation, the Open Source Security Foundation (OpenSSF), and OpenChain, is SBOMs. Stephen Hendrick, the Linux Foundation's vice president of research, defines SBOMs as "formal and machine-readable metadata that uniquely identifies a software package and its contents; it may include other information about its contents, including copyrights and license data."
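Hendrick's definition, machine-readable metadata identifying a package and its contents, can be illustrated with a minimal inventory. The field names below are illustrative only and do not constitute a conformant SPDX or CycloneDX document:

```python
import json

def minimal_sbom(app_name: str, version: str, components: list) -> dict:
    """Build a minimal machine-readable component inventory (SBOM-style).

    Illustrative field names only; real SBOMs follow the SPDX or
    CycloneDX schemas and carry far more detail (hashes, suppliers,
    dependency relationships)."""
    return {
        "name": app_name,
        "version": version,
        "components": [
            {
                "name": c["name"],
                "version": c["version"],
                "license": c.get("license", "NOASSERTION"),
            }
            for c in components
        ],
    }

# Hypothetical application with two open-source components.
sbom = minimal_sbom("example-app", "1.0.0", [
    {"name": "openssl", "version": "3.0.8", "license": "Apache-2.0"},
    {"name": "left-pad", "version": "1.3.0", "license": "MIT"},
])
print(json.dumps(sbom, indent=2))
```

Even this toy record shows why SBOMs matter: given an inventory like this for every deployed application, answering "which of our systems ship openssl 3.0.8?" becomes a query instead of an investigation.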


The race to build a social media platform on the blockchain

DSCVR, a blockchain-based social network built on Dfinity’s Internet Computer protocol, has entered the race to build a scalable DeSo platform with $9 million in seed funding led by Polychain Capital. Other participants in the round include Upfront Ventures, Tomahawk VC, Fyrfly Venture Partners, Shima Capital and Bertelsmann Digital Media Investments (BDMI), according to the company. It’s a competitive space with plenty of startups and large companies racing to build a network that provides utility for its users. Earlier this month, ex-Coinbase employee Dan Romero secured $30 million led by a16z to develop Farcaster, a DeSo protocol that allows users to move their social identity across different apps. TechCrunch covered another seed-stage startup, Primitives, that raised a $4 million round in May for its own Solana-based DeSo network. Big tech is in the game, too — Twitter funds an offshoot of its service called BlueSky, an open-source DeSo project founded in 2019 that hasn’t gone live but is experimenting publicly with its development process.


7 ways to keep remote and hybrid teams connected

Marko Gargenta, CEO and founder of PlusPlus, a maker of internal training software that he founded after creating Twitter’s Twitter University, uses that idea to create company culture. It started at Twitter because he saw that some people had deep knowledge in topics that would benefit others. He started tapping them to give workshops and share that knowledge. Those 30-minute workshops were informal, in person, and wildly popular. “One in five engineers were regularly teaching classes,” he says. Those continued when the world went remote, but they shifted to canned videos. Those did not have the same impact. “People wanted human connection,” he says. “So, we started dialing the pendulum back toward live connection. Now they happen over Zoom but are very synchronous.” That has worked well. “If you look at ancient Greece,” says Gargenta, “Plato started The Academy. It was the place where people chasing ideas or mastery congregated, which created a sense of a culture. This pattern of people chasing mastery creates community. It’s what shaped ancient Greece, and all sorts of innovations came out of that.”



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - February 16, 2022

Metaverse: Making it a universe for all

With a lot of speculation and little clarity about how it will work, technology companies and governments are only starting to invest in the concept. However, these investments and innovations continue to be riddled with the same concerns that various social scientists and philosophers have been asking of the promises made by the internet and social media. Set off to “democratise the good and disrupt the bad”, the internet has actively helped in the creation of international monopolies holding powers greater than governments of nations. Although it has brought immense information to our fingertips, gatekeepers of knowledge still continue to profit by encouraging exclusion, our social relationships have taken a back-seat as we become increasingly preoccupied with our identities online, and many vulnerable groups are left behind due to infrastructural inaccessibility to phones, laptops, computers, and the internet. As the world prepares to step into the metaverse over the next decade, these same questions come to the fore.


How to manage software developers without micromanaging

If software developers detest micromanaging, many have a stronger contempt for yearly performance reviews. Developers target real-time performance objectives and aim to improve velocity, code deployment frequency, cycle times, and other key performance indicators. Scrum teams discuss their performance at the end of every sprint, so the feedback from yearly and quarterly performance reviews can seem superfluous or irrelevant. But there’s also the practical reality that organizations require methods to recognize whether agile teams and software developers meet or exceed performance, development, and business objectives. How can managers get what they need without making developers miserable? What follows are seven recommended practices that align with principles in agile, scrum, devops, and the software development lifecycle and that could be applied to reviewing software developers. I don’t write them as SMART goals, but leaders should adopt the relevant ones as such based on the organization’s agile ways of working and business objectives.


Neuralink: Elon Musk's brain implant firm refutes animal abuse claims

The complaint centers on the care provided to test monkeys during and after implant and removal procedures at UC Davis. PCRM alleges that Neuralink and UC Davis staff failed to provide monkeys with adequate veterinary care, used an unapproved substance called BioGlue that killed monkeys in the experiments, and euthanized several monkeys. Details of the monkeys' conditions were revealed in documents released by the university after PCRM filed a public records lawsuit in 2021. Neuralink says that during the 2.5 years at UC Davis, its tests were only conducted on cadavers or "terminal procedures", which involved the "humane euthanasia of an anesthetized animal at the completion of the surgery." "The initial work from these procedures allowed us to develop our novel surgical and robot procedures, establishing safer protocols for subsequent survival surgeries," the company says. During survival studies, Neuralink says two animals were euthanized at planned dates and six animals were euthanized at the medical advice of UC Davis veterinary staff.


Creating a more sustainable IT department

Technical debt requires more infrastructure, which results in more and more carbon emissions. In addition, technical debt requires more manual processes. For example, if we have three different ERP systems that are not integrated, it takes more manual effort just to extract the reports for financial reporting, which results in more paperwork. Technical debt accumulation results in much more latency in processes, much more manual effort, much more infrastructure -- and all that adds to our carbon footprint. Because of that we in the technology group also have an eye toward not accumulating net new technical debt. We do that, in part, by looking at how we manage our vendors so we don't end up buying similar products [for different areas of the business] as well as how we introduce new technologies into our ecosystem to avoid duplication and to ensure they can scale across the organization. ... Continued hybrid and flexible working options for our employees also helps us support reduced emissions because employees don't have to commute. We have also implemented our own business systems management platform that facilitates hot-desking in the workplace.


How Dutch hackers are working to make the internet safe

However happy the foundation was with the donation, it did lead to a slight panic. “It was just before the turn of the year, the term ‘wealth tax’ came up, so we hastily set up a fund and found an administration office to handle the annual accounts,” says Van ’t Hof. This accelerated the professionalisation of the foundation. A new structure was set up, with DIVD as the fundamental institute. “Victor Gevers was the chairman of that before, but because we wanted a different structure, that stopped and we had to look for a director. Surprisingly, everyone pointed in my direction. I took on that task,” says Van ’t Hof. Under the flag of the DIVD Institute is the fund that is meant to bundle all subsidies, donations and other money flows. “From that fund, we can finance projects that contribute to a safer internet,” explains the DIVD director. To give shape to the global ambition, a separate foundation was also set up, CSIRT.global, of which Eward Driehuis is in charge. “That foundation will set up departments in other countries so that volunteer hackers there can also help to scan and report,” says Van ’t Hof.


How to Run a Cassandra Operation in Docker

Containers thrive in a world of modern applications that demand faster delivery, better portability and seamless scalability. Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production. The steam behind this growing trend is none other than Docker. Docker is an open source containerization platform that lets developers package applications into containers that include everything they need to run in different environments. For enterprises, however, it can be tricky to manage individual Docker containers at scale, giving way to the popular container orchestration platform: Kubernetes (K8s). In short, Kubernetes makes it easy to deploy, manage and scale containers — and is the dominant orchestration platform used in enterprises today. This makes learning Kubernetes a must for every budding application developer; but first, you need to understand containers and Docker. In this Cassandra Operations in Docker workshop, you’ll become familiar with Docker and learn how to deploy a cloud native application in containers.


MIT Develops New Programming Language for High-Performance Computers

The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has long been concerned with the optimization of algorithms in the context of high-performance computing. Chlipala, meanwhile, has focused more on the formal (as in mathematically-based) verification of algorithmic optimizations. This represents their first collaboration. Bernstein and Liu were brought into the enterprise last year, and ATL is the result. It now stands as the first, and so far the only, tensor language with formally verified optimizations. Liu cautions, however, that ATL is still just a prototype — albeit a promising one — that’s been tested on a number of small programs. “One of our main goals, looking ahead, is to improve the scalability of ATL, so that it can be used for the larger programs we see in the real world,” she says. In the past, optimizations of these programs have typically been done by hand, on a much more ad hoc basis, which often involves trial and error, and sometimes a good deal of error. 


Goal-Driven Kanban

Adopting goal-driven Kanban was done in one team. Initially, the team used Scrum. Due to the nature of the business, the team had a significant percentage of tasks that were dependent on other teams and various stakeholders, and thus the team repeatedly failed to complete them on time. Naturally, this caused frustration, and the team decided to switch to Kanban. This cured the issue but over time, the team members started feeling that they worked as a “feature factory”. They were missing challenges. Thus Goal-Driven Kanban was born. After receiving management support and agreement with Product Management, the team chose their first goal. It immediately revealed the need to re-plan other features and tasks since the team had to re-focus on the agreed goal. It required rough estimation of the goal, understanding of the team capacity and further agreements with stakeholders. While working on the goal the team had to tackle various challenges, because the bar was high and the team had to work together on overall design and development.


A product manager’s guide to web3

Many PMs develop skills like “communication” and “influence” at larger organizations, or even startups where they need to work closely with founders and rally overworked teams. This makes sense because persuasion and coordination have been core to the web2 PM job. Those skills don’t matter as much here. Web3 PM work is more focused on execution and community—like signing a big new protocol partner or getting tons of anon users via Twitter. In web2 I was afraid to tweet much for fear of professional consequences. Now I’d be untrustworthy if I didn’t tweet a lot. Making a viral meme is more important than writing a good email. That is because getting positive attention in the frenetic world of web3 is more valuable than “alignment.” ... Web3 moves too quickly for pontification; new protocols launch daily, and DeFi concepts like bonding curves and OHM forks are being tested in real time, so visions and strategies quickly become outdated. This may change over time as the space matures and product vision becomes more of a competitive advantage.


NIST releases software, IoT, and consumer cybersecurity labeling guidance

The order asked NIST to produce guidance for federal agency staff who have software procurement-related responsibilities; the guidance is intended to help those staff know what information to request from software producers regarding their secure software development practices. The new NIST document spells out minimum recommendations for federal agencies to follow as they acquire software or a product containing software. The order also directed NIST to define actions or outcomes for software producers, such as commercial-off-the-shelf (COTS) product vendors, government-off-the-shelf software developers, contractors, and other custom software developers. ... NIST notes that its guidance is limited to federal agency procurement of software, which includes firmware, operating systems, applications, and application services, as well as products containing software. Software developed by federal agencies is out of scope, as is open-source software freely and directly obtained by federal agencies.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - January 19, 2021

Superintelligent AI May Be Impossible to Control; That's the Good News

The researchers suggested that any algorithm that sought to ensure a superintelligent AI cannot harm people had to first simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm. However, the scientists said it was impossible for any containment algorithm to simulate the AI’s behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI’s behavior or accurately predict the consequences of the AI’s actions and not recognize such failures. “Asimov’s first law of robotics has been proved to be incomputable,” Alfonseca says, “and therefore unfeasible.” We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice’s theorem, which essentially states that one cannot in general figure anything out about what a computer program might output just by looking at the program, Alfonseca explains. On the other hand, there’s no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research still leave plenty of uncertainty in the group’s predictions.
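The core of that impossibility argument is the classic halting-problem diagonalization that underlies Rice’s theorem. A minimal Python sketch of the contradiction; the `halts` oracle here is hypothetical, which is precisely the point:

```python
def halts(program, arg):
    """Hypothetical total oracle: True iff program(arg) eventually halts.
    A perfect containment algorithm would need something at least this
    strong, and Rice's theorem implies no such total computable function
    can exist."""
    raise NotImplementedError("provably impossible in general")

def contrary(program):
    """Defeats any claimed halts() oracle by doing the opposite of its answer."""
    if halts(program, program):
        while True:   # loop forever exactly when the oracle says we halt
            pass
    return "halted"   # halt exactly when the oracle says we loop

# Feeding contrary to itself is contradictory either way the oracle answers,
# so no implementation of halts() can be correct on all inputs.
```

Any containment algorithm that simulated the superintelligence and halted it only "if it might do harm" would, in effect, be such an oracle, which is why the authors conclude it cannot exist in general.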


Rethinking Active Directory security

A change made within on-premises Active Directory by an attacker can provide access to much more than just local resources. An attacker can, for example, make a compromised on-premises user account a member of a Sales group in Active Directory. This group would likely provide access to on-premises systems, applications, and critical data. But because Active Directory often federates with cloud applications via an external IDP (e.g., Azure AD), it’s reasonable to assume that this same change in membership could allow access to a cloud-based CRM environment (like Salesforce), customer data (hopefully limited to the breached account, but more likely extending to the entire organization’s data) and other resources. Many cyberattacks are more complex than the example above: it’s necessary to gain elevated privileges via one account only to compromise a second, third, and so on, each time moving from system to system, or – in the case of a hybrid environment – from on-premises to cloud, leveraging access to on-premises Active Directory to specifically target accounts known to have access in the cloud.


The Great Compromise In AI’s Buy Vs Build Dilemma

Building AI in-house presents a variety of benefits. When done right, a built approach can lead to a stable, production-grade AI solution that is perfectly tailored to the specific needs and requirements of an industry or company. Digital natives have shown the impact of building AI from scratch. IBM is a prominent example of a business that has launched successful in-house AI into production. A recent report found IBM’s Watson Assistant AI paid for itself in just six months, with a three-year ROI of 337%. For digital adopters, however, successfully building and implementing an AI solution in-house is easier said than done without access to sizable capital and infrastructure. “When building an AI solution in-house, companies typically hire a team without significantly investing in the foundational elements that are required to stabilize AI in complex and dynamic environments,” suggests Nurit Cohen Inger, VP of Products at AI company BeyondMinds. “This approach, unfortunately, has typically meant a long and costly process to reach ROI positivity or in the worst case, never achieving production. Before developing AI solutions, businesses must heavily invest in solving the barriers that hold them back from turning proof of concepts into successful solutions in production.”


Training from the Back of the Room and Systems Thinking in Kanban Workshops

It’s very tempting to put everything you know on a training agenda, especially when you, as a trainer, feel that you have to know everything and constantly impress the learners. It’s always hard to pare workshop content down to the bare minimum, especially when you have a lot of knowledge, experience, and fun stories to share. But if you are aiming for deep understanding and a lot of practice, less content translates into more value. Overloading groups with new information may lead to chaos during your class. They will struggle to understand which new tool or technique they should use first. In the end, they may just quit before they even start. ... Training From the Back of the Room (TBR) is a fresh approach to learning, training, presenting and facilitating that was developed by Sharon Bowman. It uses cognitive neuroscience and brain-based learning techniques to help learners retain new information. TBR teaches you how to engage the five senses and keeps your learners active and engaged throughout the class. The concept is recognized internationally as one of the most effective frameworks for accelerated learning. It is a new way of teaching adults.


How COVID-19 accelerated a digital revolution in the insurance industry

The pandemic reminded us that we’re human. This experience has taught us compassion, grace, and the importance of both the health and wellbeing of ourselves and our families. COVID-19 has fundamentally reshaped the way we view protection products. In fact, two thirds (66%) of Americans say they now better understand life insurance’s value, with another quarter buying coverage for the first time. Awareness around the role of employers in providing access to these products has also increased. In a recent LIMRA study, one in four employees said they are more likely to sign up for certain benefits available through their employer. Along with this heightened awareness of our mortality and morbidity comes the realization that we thrive on human interaction. We can’t take a digital-only approach. Bringing emotion—positive emotion and empathy—to the experience and every interaction we have with customers will help us get farther, faster. As we continue to invest in technology across the insurance industry, we need to look for ways to make digital and human experiences work together for customers, employers, and financial professionals. Many of our customers tell us they don’t understand insurance products and they don’t know where to start educating themselves. 


7 Blindspots You Need to Uncover to Achieve Digital Banking Breakthrough

To explain how the “experience gap” might cause trouble, I’d like to share a real-life example. Several years ago a fairly well-known and respected Central European bank embarked on an ambitious digital transformation journey. The bank’s application had a rating of 3.5 and was outdated. To digitalize, improve the bank’s image, and boost its competitive chances in the growing digital market, management intended to urgently create and launch a modern-looking banking application. The initial design and development period was therefore set at six months. Nevertheless, the bank spent more than three times as long building the new application by itself: one year and eight months. This was a serious project not only in terms of time but also the budget invested. Judging by the scope of the project, the improvements made and the timeline, the overall costs could be estimated at around half a million. However, the result did not live up to expectations at all. After the new application was released, its rating dropped from the previous 3.5 to 2.4 and kept falling even a year after the first release, as the application did not improve but significantly worsened the customer experience.


Riding out the wave of disruption

Disruption is not necessarily the crisis it’s frequently considered to be for incumbents, the researchers stress. Two technologies can often coexist in the marketplace for a significant period. Thus, it’s important for incumbent companies not to overreact. They should target dual users and reexamine the factors that have led to the old technology sticking around for so long. Of course, the profit implications of cannibalization of the old technology and leapfrogging depend on which type of firm is trumpeting the new technology. New entrants will always stand to gain when they introduce a technology that takes off. But incumbents rolling out a successive technology will also gain if their competitors would have introduced it anyway or if the 2.0 version has a higher profit margin than the original. The authors write, “Leapfroggers are an opportunity loss for incumbents, but switchers are a real loss.” Regardless of the predictive model they use, marketers should strive to understand how the various consumer segments identified in this study will grow or shrink over time and use that information in their forecasts of early sales or market penetration of successive technologies.


Understanding the AI alignment problem

What’s worse is that machine learning models can’t tell right from wrong or make moral decisions. Whatever problem exists in a machine learning model’s training data will be reflected in the model’s behavior, often in nuanced and inconspicuous ways. For instance, in 2018, Amazon shut down a machine learning tool used in making hiring decisions because its decisions were biased against women. Obviously, none of the AI’s creators wanted the model to select candidates based on their gender. In this case, the model, which was trained on the company’s historical hiring data, reflected problems within Amazon itself. This is just one of the several cases where a machine learning model has picked up biases that existed in its training data and amplified them in its own unique ways. It is also a warning against trusting machine learning models that are trained on data we blindly collect from our own past behavior. “Modeling the world as it is is one thing. But as soon as you begin using that model, you are changing the world, in ways large and small. There is a broad assumption underlying many machine-learning models that the model itself will not change the reality it’s modeling. In almost all cases, this is false,” Christian writes.
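The mechanism behind such failures is easy to demonstrate with a toy: a naive frequency model trained on deliberately biased historical labels reproduces the bias at prediction time. Everything below is synthetic and illustrative; it is not Amazon’s actual data or system:

```python
# Synthetic, deliberately biased 'historical hiring' data:
# equally qualified candidates, but past decisions favored group A.
history = (
    [("A", "qualified", "hire")] * 80 + [("A", "qualified", "reject")] * 20 +
    [("B", "qualified", "hire")] * 30 + [("B", "qualified", "reject")] * 70
)

def train(rows):
    """Naive frequency model: P(hire) per (group, qualification)."""
    counts = {}
    for group, qual, outcome in rows:
        hires, total = counts.get((group, qual), (0, 0))
        counts[(group, qual)] = (hires + (outcome == "hire"), total + 1)
    return {key: hires / total for key, (hires, total) in counts.items()}

model = train(history)
# Identical qualifications, very different scores: the training data's
# bias is now the model's behavior.
print(model[("A", "qualified")])  # 0.8
print(model[("B", "qualified")])  # 0.3
```

The model never sees the word “bias”; it simply learns that group membership predicted past outcomes, which is exactly how a problem in the training data becomes a problem in behavior.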


Fixing the cracks in public sector digital infrastructure

First, there needs to be a government-wide, comprehensive digital skills strategy. One survey of industry professionals found that 40% of public sector organisations did not have the right skills to carry out digital transformation. Every member of the workforce needs to be able to perform basic tasks online. But to press forward with digital transformation, the government needs to champion digital leadership in the public sector – and that includes paying properly for those skills. The Government Digital Service recently advertised for a head of technology and architecture with a maximum salary of £70,887 a year. According to Google Jobs, typical pay for this type of work ranges from £65,000 to £180,000 in the private sector. This puts the public sector at a unique disadvantage and pay scales should be reviewed. ... Second, the Cabinet Office needs to address the gap between guidance and action on the ground. Out-of-date technology is widespread in some areas of the public sector, despite there being a large volume of information from central government on maintaining and updating digital infrastructure. Legacy IT has been holding digital public services back for years and will continue to do so unless there is a cross-government push to drive this forward.


Emotion Detection in Tech: It’s Complicated

Emotion detection would be a lot easier if humans expressed themselves in homogenous ways. However, cultural backgrounds and unique life experiences influence personal expression. Michelle Niedziela, VP of research and innovation at market research firm HCD Research, said advertisers and their agencies can get overly excited about the "happy" responses an ad drives when the response may have been a natural reflex. "If I smile at you, you innately smile back. So, one thing is are they really feeling happy or just projecting happy?" said Niedziela. "But also, how big does a smile have to be in order to be interpreted as happy?" Even cheap camera sensors are improving, but some of them may not be able to detect subtle nuances in facial geometry or provide the same degree of reliability among individuals who represent different races. Also, things that change an individual's appearance like hats, bangs or facial hair can negatively impact the accuracy of emotion sensing. "In my mind, the two biggest challenges are hardware quality and the models," said Capgemini's Simion. "You need to be very careful when you're talking about emotionality is the dataset you're going to use because if you're just going to call normal APIs from the cloud providers, that's not going to help much."



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - November 27, 2019

10 Predictions How AI Will Improve Cybersecurity In 2020

Nicko van Someren, Ph.D. and Chief Technology Officer at Absolute Software, observes that “Keeping machines up to date is an IT management job, but it's a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what's going on and what processes are running and what's consuming network bandwidth is an IT management problem, but it's a security outcome. I don't see these as distinct activities so much as seeing them as multiple facets of the same problem space, accelerating in 2020 as more enterprises choose greater resiliency to secure endpoints.” ... Josh Johnston, Director of AI at Kount, predicts that “the average consumer will realize that passwords are not providing enough account protection and that every account they have is vulnerable. Captcha won’t be reliable either, because while it can tell if someone is a bot, it can’t confirm that the person attempting to log in is the account holder. AI can recognize a returning user. AI will be key in protecting the entire customer journey, from account creation to account takeover, to a payment transaction. ...”


Wolfram Language has limitations, and has been described by some users as better suited to solving a wide range of predetermined tasks, rather than being used to build software. It also seems there is still a way to go for Wolfram Language – it didn't, for example, feature in the IEEE's recent list of top programming languages. Wolfram has said that Wolfram Language is not just a language for telling computers what to do, but a way for both computers and humans to represent computational ways of thinking about things. Of late Wolfram has been more bold in how he talks about Wolfram Language, describing it as a "computational language" that could even help bridge the gulf between ourselves and future non-human intelligences, be they artificial intelligence (AI) or extraterrestrial. As esoteric a pursuit as it might seem, Wolfram believes the need for this lingua franca is timely, as machine-learning systems increasingly make decisions about our lives -- whether that's screening loan applications today or maybe even choosing whether to kill people tomorrow.


Tech jobs: These are the skills hiring managers are looking for now


CompTIA noted that the technology workforce, in particular, has been under the microscope for its lack of diversity. Diversity in tech staffing is likely to improve due to continuing pressure, the association said, but "fully diverse and inclusive environments still lie further in the future". A wide range of research and anecdotal examples proves that there's still much work to do in achieving equity, from data on wage gaps to the makeup of executive teams to ongoing reports of abusive behaviour, CompTIA said. Although 30% of companies feel that there has been significant improvement in the diversity of the tech workforce over the past two years, previous CompTIA research shows that "sentiment tends to skew more positive than reality on this topic." "The trend may be heading in the right direction, but the chasm was so wide that it will take significant time and intentional changes to close," said CompTIA, noting that there is a long list of potential actions that could improve the situation. Flexible work arrangements, including the physical environment, can create more opportunities and a more welcoming atmosphere, especially if there is a hard look at how the existing arrangements unintentionally create barriers, the association said.


AI Is The Link Between Big Data & Persons-Level Measurement

To highlight the shortcomings of big data from a measurement perspective, we conducted an analysis in the U.S. earlier this year that compared set-top box data with set-top box data that we calibrated with Nielsen panel data. The analysis found that the uncalibrated data is inherently biased and underrepresents minority audiences. That’s not to say, however, that big data has no value. Quite the opposite. But it does need to be grounded in a foundational truth set. That’s where our panels and artificial intelligence (AI) come into play. Our panel data—the key to persons-level measurement—is the perfect truth set for training big data. Through the application of AI, we use big data to dramatically broaden our measurement capabilities while preserving quality and representativeness. Today, AI is integral in our measurement methodologies. For example, it played a pivotal role in the development of our enhanced measurement capabilities for local TV markets, which combines the scale of big data (return path data, or RPD, from TV sets) with fully representative in-market panel data.


GDPR Data Regulations & Commercial Fines


The public and private sectors are both impacted, although government agencies have more leeway across GDPR in general due to requirements to retain and use data to deliver services to citizens. In terms of what best practice should be in dealing with a request, the advice from the UK’s Information Commissioner’s Office is that there should be a policy for recording all “subject access requests” and that, based on Recital 59 of the GDPR, organisations “provide means for requests to be made electronically, especially where personal data are processed by electronic means.” This process will start with an access request form, but when it comes to identity, the guidance is unclear. A number of organisations are asking for a set of documents similar to those most banks require to open an account, which includes a “proof of identity” such as a passport, photo driving license or birth certificate, along with a “proof of address” such as a utility bill, bank statement or credit card statement. This requirement to verify from copies or scans of electronic documents is a major weakness in this process.


Non-functional Requirements
Simply put, a non-functional requirement is a specification that describes the system’s operational capabilities and the constraints that enhance its functionality. These may be speed, security, reliability, etc. We’ve already covered different types of software requirements, but this time we’ll focus on non-functional ones and how to approach and document them. If you’ve ever dealt with non-functional requirements, you may know that different sources and guides use different terminology. For instance, the ISO/IEC 25000 standards framework defines non-functional requirements as system quality and software quality requirements. BABOK, one of the main knowledge sources for business analysts, suggests the term non-functional requirements (NFR), which is currently the most common designation. Nevertheless, these designations refer to the same type of matter: requirements that describe operational qualities rather than the behavior of the product. The list of them also varies depending on the source.
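One practical way to approach documenting non-functional requirements is to phrase each as a measurable constraint that can be checked against live measurements. The structure, field names, and thresholds below are illustrative assumptions, not taken from ISO/IEC 25000 or BABOK:

```python
# Hypothetical machine-checkable NFRs: each one names a quality attribute,
# a measurable metric, and a pass/fail threshold.
nfrs = [
    {"id": "NFR-1", "quality": "performance",
     "metric": "p95_latency_ms", "limit": 200},
    {"id": "NFR-2", "quality": "reliability",
     "metric": "monthly_downtime_min", "limit": 43},  # roughly 99.9% uptime
]

def violations(nfrs, measurements):
    """Return the ids of requirements the measured system fails."""
    return [n["id"] for n in nfrs
            if measurements[n["metric"]] > n["limit"]]

# Example measurements for a running system (also hypothetical).
measured = {"p95_latency_ms": 180, "monthly_downtime_min": 95}
print(violations(nfrs, measured))  # only the reliability target is missed
```

Writing an NFR as "p95 latency under 200 ms" rather than "the system shall be fast" is what makes it testable, regardless of which terminology a given standard uses.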


The Road to 2030 Must Be Circular


What gets exciting is when you can find the perfect material match in someone else’s waste. Carbon fiber is a great example. It turns out computers use a similar grade of carbon fiber to airplanes. So we reclaim aerospace material for Latitude, our commercial notebook line. To date, Dell has prevented more than 2 million pounds of carbon fiber from ending up in landfills. And in this case, the benefits go far beyond the environment. We’ve partnered with Carbon Conversions, a start-up based in South Carolina with a mission to reclaim and recycle carbon fiber. Carbon Conversions has redesigned and reengineered the papermaking process to produce carbon fiber non-woven fabrics, bringing new growth to an area historically impacted by overseas manufacturing. Finding more partners like Carbon Conversions will be important. It will also be important to increase our own recycling streams dramatically (i.e. you all have a role to play too). We must make it as easy as possible for you to recycle.


Bringing Business and IT Together, Part II: Organizational Alignment

COA is similar to other continuous improvement processes such as continuous quality improvement (CQI) and continuous process improvement (CPI). Just as CQI and CPI demand structure and metrics, so too does COA. Continuous improvement is evolutionary and incremental. It is manageable only when understood as a set of interconnected components that can be identified and measured. The COA Framework illustrated in Figure 1 provides the necessary structure. This three-dimensional structure associates the core elements of COA – those of organizational alignment and working relationships – with the activities of continuous improvement. The framework identifies the components that can be managed, measured, and modified to improve the overall alignment of business and technology organizations. ... Organization-to-organization relations are ideally structured and business-like. Conversely, person-to-person relationships are best when unstructured and friendly. Team-to-team relationships seek a balance between the two extremes.


VMware doubles up on Kubernetes play


Many of our large customers have Kubernetes clusters on vSphere, Amazon EC2 and sometimes bare metal. These are managed by different teams, making it difficult to manage and control everything. That was a problem we wanted to solve. Then comes the next question on how we can help customers build and deploy new applications. Historically, we’ve relied on Pivotal as a partner to help customers modernise their applications. While Pivotal Cloud Foundry is a great platform, Pivotal last year decided to use Kubernetes as the default runtime for their developer platform. Meanwhile, Spring Boot was becoming the de facto way by which people built microservices. So, we felt that by bringing Pivotal into the family, we could offer a very comprehensive solution to help customers build, run and manage their modern applications.


Using Kanban with Overbård to Manage Development of Red Hat JBoss EAP

Red Hat JBoss EAP (Enterprise Application Platform) has become a very complex product. As a result, planning EAP releases is also increasingly complicated. In one extreme case, with the team working on the next major release while still developing features for the previous minor release, planning for that major release went on for 14 months with the requirements constantly changing. However, spending more effort on planning didn’t improve the end result; it didn’t make us any smarter or more accurate. We’d rather spend more time doing things than talking about them. That was a major problem. In addition, there were cases in which requirements could be misunderstood or miscommunicated and we found that out late in the cycle. We had to find a way to collectively iterate over a requirement and make sure everyone understood what was to be done. In some cases we could go as far as implementing a proof-of-concept before we would be certain we fully understood the problem and the proposed solution.



Quote for the day:


"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik


Daily Tech Digest - September 01, 2019

Software Ate The World, Now AI Is Eating Software

The extent to which Andreessen’s cherished software companies are weaving AI into their products is, however, often limited. Instead, a new slew of start-ups now incorporates an infrastructure based on the above-mentioned AI-facilitating processes from their very foundation. Driven by gains in efficiency, these new companies use AI to automate and optimize the very core processes of their business. As an example, no fewer than 148 start-ups are aiming to automate the very costly process of drug development in the pharmaceutical industry, according to a recent update on BenchSci. Likewise, AI start-ups in the transportation sector create value by optimizing shipments, thus vastly reducing the number of empty or idle transports. The process of software development itself is also affected: AI-powered automatic code completion and generation tools such as TabNine, TypeSQL and BAYOU are being created and made ready to use.



The disruption effort began after Avast in March traced back a rise in stealthy cryptocurrency mining infections to variants of a worm called Retadup, written in both AutoIt and AutoHotkey scripts. Researchers began studying the command-and-control communications being used to control infected endpoints, or bots, says Jan Vojtesek, a malware researcher at Avast, in a research report. "After analyzing Retadup more closely, we found that while it is very prevalent, its C&C communication protocol is quite simple," he says. "We identified a design flaw in the C&C protocol that would have allowed us to remove the malware from its victims' computers had we taken over its C&C server." Avast alerted France's national cybercrime investigation team, C3N, that servers in France appeared to be hosting the majority of the command-and-control infrastructure for distributing and controlling the Retadup worm - in other words, self-replicating malware. Avast also shared a technique that it thought might allow authorities to neutralize existing infections.


Unlike some companies where departmental work groups are not always accessible to those outside those groups, Facebook employees can participate in any group. “Most of those groups are what we call open QA. What that means is that people outside of those groups can also see the information. And you’ll be surprised how this tackles a number of challenging problems as the company grows,” Nguyen said. For one, open work groups will help to prevent duplication of projects, since developers can see what other teams are doing, and avoid building the same things. In cases where duplicate projects are already being built, Nguyen would step in to bring the teams together in an open dialogue. “There were a few teams within infrastructure and Instagram that were building different technologies for logging of data,” Nguyen recalled. “One of the engineers at Instagram escalated [the issue] to me and I set up a meeting for them to work together.”


4 Cybersecurity Professionals That Can Benefit from Threat Intelligence

The first layer of defense that most organizations rely on is their own security operation center (SOC). Whether outsourced or in-house, security operations analysts need to possess a broad set of skills to be effective. This includes capabilities in log monitoring, penetration testing, incident response, access management, and more. Each one of these tasks requires a different group of systems and solutions to work well, which are usually not integrated. This means that SOCs often have to deal with unending alerts and big data that may not come with much context. Threat intelligence enriches alert management. It provides context to help SOCs know which alerts need to be prioritized. Some threat intelligence platforms readily offer this kind of automation using machine learning (ML) or similar technologies. Just like SOCs, incident response teams face the challenge of getting information that lacks context. They are also bombarded with numerous alerts from their security information and event management (SIEM) solutions and so are forced to choose which ones to prioritize.
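The enrichment idea can be sketched in a few lines: alerts are joined against an intelligence feed of indicator scores, and the result is sorted so that high-risk alerts surface first. The feed entries, field names, and scores below are all hypothetical:

```python
# Hypothetical threat-intel feed: indicator -> risk score (0-100).
intel_feed = {
    "185.220.101.4": 95,   # known C2 infrastructure (hypothetical entry)
    "10.0.0.7": 5,         # internal host, low risk (hypothetical entry)
}

# Raw alerts as a SIEM might emit them, with no context attached.
alerts = [
    {"id": 1, "src_ip": "10.0.0.7",      "rule": "port scan"},
    {"id": 2, "src_ip": "185.220.101.4", "rule": "outbound beacon"},
    {"id": 3, "src_ip": "203.0.113.9",   "rule": "failed login"},
]

def enrich(alert, feed):
    """Attach intel context; unknown indicators get a neutral score."""
    enriched = dict(alert)
    enriched["intel_score"] = feed.get(alert["src_ip"], 50)
    return enriched

# Highest-risk alerts first: the beacon to known C2 tops the queue.
prioritized = sorted((enrich(a, intel_feed) for a in alerts),
                     key=lambda a: a["intel_score"], reverse=True)
```

Real platforms do this at much larger scale with ML-driven scoring, but the effect is the same: context turns a flat alert queue into a prioritized one.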


Cloud Storage Is Expensive? Are You Doing it Right?


A common solution, now adopted by a significant number of organizations, is data repatriation: bringing data back on premises (or to a colocation service provider) and accessing it locally or from the cloud. Why not? At the end of the day, the bigger the infrastructure the lower the $/GB and, above all, there are no other fees to worry about. When thinking about petabytes, there are several optimizations that can lower the $/GB considerably: fat nodes with plenty of disks, multiple media tiers for performance and cold data, data footprint optimizations, and so on, all translating into low and predictable costs. At the same time, if this is not enough, or you want to keep a balance between CAPEX and OPEX, go hybrid. Most storage systems in the market can now tier data to S3-compatible storage systems, and I’m not talking only about object stores – NAS and block storage systems can do the same. I covered this topic extensively in this report, but check with your storage vendor of choice and I’m sure they’ll have solutions to help out with this.
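A back-of-the-envelope comparison shows why repatriation can pay off at petabyte scale. Every number below is an illustrative assumption, not a quoted price from any vendor:

```python
# Hypothetical monthly cost comparison for keeping 1 PB of data.
PB_GB = 1_000_000  # 1 PB in GB (decimal)

# Cloud: storage fee plus egress on the fraction of data read back out.
cloud_storage_per_gb = 0.021   # assumed $/GB-month storage price
egress_per_gb = 0.09           # assumed $/GB egress fee
egress_fraction = 0.10         # assume 10% of the data leaves the cloud monthly
cloud_monthly = PB_GB * (cloud_storage_per_gb
                         + egress_per_gb * egress_fraction)

# On-prem: hardware amortized over 5 years plus operational overhead.
hardware_cost = 350_000        # assumed all-in system cost for 1 PB
months = 60                    # 5-year amortization period
ops_per_month = 4_000          # assumed power/space/admin per month
onprem_monthly = hardware_cost / months + ops_per_month

print(f"cloud:   ${cloud_monthly:,.0f}/month")
print(f"on-prem: ${onprem_monthly:,.0f}/month")
```

Under these particular assumptions the egress fees alone dominate the cloud bill; changing the assumed read-back fraction or amortization period can easily flip the conclusion, which is why the hybrid option is worth keeping open.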


The First Artificial Memory Has Been Successfully Created and Implanted

Previous research had shown that it was possible to partially transfer memories from one rodent to another by reproducing the electrical activity associated with a specific memory in one mouse and jolting it into the brain of another. This new experiment is different: this time the memory was created entirely artificially, from the ground up. It consisted of a few parts. First, the researchers used a technique called optogenetics, in which fiber-optic cables are surgically implanted into the olfactory region of the mice's brains so that light can be used to switch on proteins associated with specific smells. To do that, the mice had to be genetically engineered to produce the light-sensitive protein only in the region associated with acetophenone—AKA the scent of cherry blossoms. Now the researchers could artificially create the scent of cherry blossoms in the brain of a mouse. So we're already into some wacky stuff, but don't worry. It gets wackier.


Semi-supervised learning explained

Self-training uses a model's own predictions on unlabeled data to add to the labeled data set. You set a threshold for the confidence level of a prediction, often 0.5 or higher, above which you trust the prediction and add it to the labeled data set, then keep retraining the model until no confident predictions remain. This raises the question of which model to use for training; as in most machine learning, you probably want to try every reasonable candidate model in the hopes of finding one that works well. Self-training has had mixed success. Its biggest flaw is that the model is unable to correct its own mistakes: one high-confidence (but wrong) prediction on, say, an outlier can corrupt the whole model.

Multi-view training trains different models on different views of the data, which may include different feature sets, different model architectures, or different subsets of the data. There are a number of multi-view training algorithms, but one of the best known is tri-training.
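The self-training loop described above can be sketched end to end on toy data. The nearest-centroid "model" and the 0.75 confidence threshold are illustrative choices; in practice you would plug in any classifier that reports prediction confidence:

```python
# Minimal self-training loop on 1-D toy data.

def fit(labeled):
    """Fit a nearest-centroid model: the mean of each class's points."""
    groups = {}
    for x, y in labeled:
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def predict(model, x):
    """Return (label, confidence), confidence from relative centroid distances."""
    dists = {y: abs(x - c) for y, c in model.items()}
    best = min(dists, key=dists.get)
    total = sum(dists.values())
    # Confidence is 1.0 on a centroid, 0.5 when equidistant between two.
    conf = 1.0 if total == 0 else 1 - dists[best] / total
    return best, conf

def self_train(labeled, unlabeled, threshold=0.75):
    labeled, unlabeled = list(labeled), list(unlabeled)
    while True:
        model = fit(labeled)
        scored = [(x, *predict(model, x)) for x in unlabeled]
        adopted = [(x, y) for x, y, c in scored if c >= threshold]
        if not adopted:          # stop: no prediction clears the threshold
            return fit(labeled)
        labeled += adopted       # trust confident predictions as new labels
        taken = {x for x, _ in adopted}
        unlabeled = [x for x in unlabeled if x not in taken]

labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabeled = [0.5, 1.5, 8.5, 5.2]
model = self_train(labeled, unlabeled)
print(model)
```

The ambiguous point at 5.2 never clears the threshold and is simply left out, which is the desired behavior; the flaw the article notes appears when a confidently mislabeled outlier does get adopted and drags the centroids with it.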


Sprint Reviews With Kanban

Kanban is sometimes thought of as a soft option, because “flow” is misinterpreted as “whatever gets delivered gets delivered”. A team will start with what it is, realistically, doing now. There is no need to vamp up Sprints. The odious Sprint Goal and the contrived forecast of work in the Sprint Backlog are dispensed with. It looks as if the team can no longer be held hostage to fortune. In Kanban there is no Great Lie to be fabricated about a planned Sprint outcome, and, it is assumed, no great commitment to hang over team members’ heads like the Sword of Damocles. What possible use can there be for a monstrous Sprint Review? Instead, there ought to be a succession of mini-reviews with the Product Owner as each item is completed. Mini-reviews can be useful and timely, and they are all very well as far as they go. In truth, however, a professional Kanban team will not escape from making a serious commitment, nor would it ever seek to do so. For one thing, its members will need to understand and define a commitment point in their workflow.


Hackers Hit Unpatched Pulse Secure and Fortinet SSL VPNs


Based on their count of recently published common vulnerabilities and exposures in SSL VPNs, the researchers hypothesized that Cisco equipment would be the riskiest to use. To test that hypothesis, they began looking at SSL VPNs and found exploitable flaws in both Pulse Secure and Fortinet equipment. The researchers reported flaws to Fortinet on Dec. 11, 2018, and to Pulse Secure on March 22. ... In response, Fortinet released a security advisory on May 24 and updates to fix 10 flaws, some of which could be exploited to gain full, remote access to a device and the network it was meant to be protecting. In particular, it warned that one of the flaws, "a path traversal vulnerability in the FortiOS SSL VPN web portal" - CVE-2018-13379 - could be exploited to enable "an unauthenticated attacker to download FortiOS system files through specially crafted HTTP resource requests." Such FortiOS system files contain sensitive information, including passwords, meaning attackers could quickly give themselves a way to gain full access to an enterprise network.
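To see what a path traversal check looks like in general, here is a defensive sketch of resolving a requested resource safely under a web root. This is a generic illustration of the vulnerability class, not Fortinet's code and not the CVE-2018-13379 exploit; a production check would also need to handle double encoding and symlinks:

```python
# Reject "specially crafted" resource paths that escape the web root.

import posixpath
from urllib.parse import unquote

def safe_resolve(web_root, requested):
    """Resolve a requested path under web_root, refusing traversal outside it."""
    # Decode percent-encoding first: "%2e%2e%2f" is "../" after decoding.
    decoded = unquote(requested)
    # Normalize away "." and ".." segments, then anchor under the web root.
    candidate = posixpath.normpath(posixpath.join(web_root, decoded.lstrip("/")))
    if candidate != web_root and not candidate.startswith(web_root + "/"):
        raise PermissionError(f"traversal attempt: {requested!r}")
    return candidate

print(safe_resolve("/var/www", "portal/index.html"))   # resolves normally
try:
    safe_resolve("/var/www", "../../etc/passwd")       # escapes the root
except PermissionError as e:
    print(e)
```

The key idea is to normalize *after* decoding and then verify the result is still inside the allowed prefix, rather than scanning the raw string for "../".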


How to bolster IAM strategies using automation


Litton argues that automation is also important for protecting critical data assets. “An example of this is when an employee leaves an organisation or a technology supplier relationship ends,” he says. “Automation can ensure that their accounts do not remain in an active state, thus eliminating a potential avenue through which bad actors can access data. When implemented properly, automated IAM solutions can also identify orphan accounts automatically and alert system owners.” Identity management systems comprise users, applications and policies, all of which govern how people are able to use software. Litton says automated IAM systems can fully automate identity creation at scale; automatically manage user access; apply role- and attribute-driven policies; and completely remove the need for passwords, helping to improve the user experience, while decreasing the helpdesk support burden.
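The orphan-account detection Litton describes boils down to reconciling the accounts a system knows about against the set of people and suppliers who are still active. A minimal sketch, with hard-coded stand-ins for a directory export and an HR/procurement feed:

```python
# Sketch of automated orphan-account detection. All accounts, names, and
# dates are hypothetical placeholder data.

from datetime import date

# Accounts as an identity system might report them: login -> last activity.
system_accounts = {
    "avogel": date(2023, 6, 28),
    "bsmith": date(2023, 6, 30),
    "cjones": date(2022, 11, 2),      # former employee, never deprovisioned
    "svc_backup": date(2023, 6, 29),  # service account for an ended supplier deal
}

# Current roster from HR (employees) and procurement (active suppliers).
active_people = {"avogel", "bsmith"}
active_service_owners = set()  # the supplier relationship has ended

def find_orphans(accounts, people, service_owners):
    """Accounts with no matching active owner: disable them and alert owners."""
    owned = people | service_owners
    return sorted(login for login in accounts if login not in owned)

orphans = find_orphans(system_accounts, active_people, active_service_owners)
print("disable and alert system owners:", orphans)
```

A real IAM pipeline would run this reconciliation on every HR or supplier change event, so accounts never "remain in an active state" after the relationship ends.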



Quote for the day:


"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis