Each January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year’s top 10 highlights our work in quantum computing, software modeling, large language models, DevSecOps, and artificial intelligence. The posts, all published between January 1, 2023, and December 31, 2023, are presented below in reverse order based on the number of visits.
#10 Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System
by Carrie Gardner, Katherine-Marie Robinson, Carol J. Smith, and Alexandrea Steiner
As potential applications of artificial intelligence (AI) continue to expand, the question remains: will users want the technology and trust it? How can innovators design AI-enabled products, services, and capabilities that are successfully adopted, rather than discarded because the system fails to meet operational requirements, such as end-user confidence? AI’s promise is bound to perceptions of its trustworthiness.
To spotlight a few real-world scenarios, consider:
- How does a software engineer gauge the trustworthiness of automated code generation tools to co-write functional, quality code?
- How does a doctor gauge the trustworthiness of predictive healthcare applications to co-diagnose patient conditions?
- How does a warfighter gauge the trustworthiness of computer-vision-enabled threat intelligence to co-detect adversaries?
What happens when users don’t trust these systems? AI’s ability to successfully partner with the software engineer, doctor, or warfighter in these cases depends on whether those end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver.
This blog post explores leading research and lessons learned to advance discussion of how to measure the trustworthiness of AI so that warfighters and end users in general can realize the promised outcomes.
Read the post in its entirety.
#9 5 Best Practices from Industry for Implementing a Zero Trust Architecture
by Matthew Nicolai, Nathaniel Richmond, and Timothy Morrow
Zero trust (ZT) architecture (ZTA) has the potential to improve an enterprise’s security posture. There is still considerable uncertainty about the ZT transformation process, however, as well as how ZTA will ultimately appear in practice. Recent executive orders M-22-09 and M-21-31 have accelerated the timeline for zero trust adoption in the federal sector, and many private sector organizations are following suit. In response to these executive orders, researchers at the SEI’s CERT Division hosted Zero Trust Industry Days in August 2022 to enable industry stakeholders to share information about implementing ZT.
In this blog post, which we adapted from a white paper, we detail five ZT best practices identified during the two-day event, discuss why they are important, and provide SEI commentary and analysis on ways to empower your organization’s ZT transformation.
Read the post in its entirety.
#8 The Challenge of Adversarial Machine Learning
by Matt Churilla, Nathan M. VanHoudnos, and Robert W. Beveridge
Imagine riding to work in your self-driving car. As you approach a stop sign, instead of stopping, the car speeds up and goes through the stop sign because it interprets the stop sign as a speed limit sign. How did this happen? Even though the car’s machine learning (ML) system was trained to recognize stop signs, someone added stickers to the stop sign, which fooled the car into thinking it was a 45-mph speed limit sign. This simple act of putting stickers on a stop sign is one example of an adversarial attack on ML systems.
In this SEI Blog post, I examine how ML systems can be subverted and, in this context, explain the concept of adversarial machine learning. I also examine the motivations of adversaries and what researchers are doing to mitigate their attacks. Finally, I introduce a basic taxonomy delineating the ways in which an ML model can be influenced and show how this taxonomy can be used to inform models that are robust against adversarial actions.
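The stop-sign scenario can be sketched in miniature. The toy example below, with invented weights and pixel values standing in for a real image model, shows the core mechanic of an evasion attack: a small, targeted perturbation (the digital analog of the stickers) flips a linear classifier’s label. Real attacks target deep networks, but the gradient-sign idea is the same.

```python
# A toy evasion attack on a hypothetical linear classifier.
# score > 0 means "stop sign"; score <= 0 means "speed limit sign".
# Weights and pixel values are invented for illustration.

def classify(weights, pixels):
    """Linear classifier: a positive score is read as 'stop sign'."""
    score = sum(w * p for w, p in zip(weights, pixels))
    return "stop sign" if score > 0 else "speed limit sign"

def adversarial_perturb(weights, pixels, epsilon):
    """Fast-gradient-sign-style step: nudge each pixel a small amount
    in the direction that most decreases the 'stop sign' score."""
    return [p - epsilon * (1 if w > 0 else -1)
            for w, p in zip(weights, pixels)]

weights = [0.9, -0.4, 0.7, 0.2]   # hypothetical trained weights
image = [0.6, 0.1, 0.5, 0.3]      # hypothetical stop-sign image

print(classify(weights, image))   # correctly labeled "stop sign"
stickered = adversarial_perturb(weights, image, epsilon=0.5)
print(classify(weights, stickered))  # label flips to "speed limit sign"
```

The perturbation is small per pixel, yet it is aimed precisely where the model is most sensitive, which is why such attacks succeed without looking suspicious to humans.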
Read the post in its entirety.
#7 Play it Again, Sam! or How I Learned to Love Large Language Models
by Jay Palat
“AI will not replace you. A person using AI will.”
-Santiago @svpino
In our work as advisors in software and AI engineering, we are often asked about the efficacy of AI code assistant tools like Copilot, GhostWriter, or Tabnine, which are based on large language models (LLMs). Recent innovation in the building and curation of LLMs demonstrates powerful tools for the manipulation of text. By finding patterns in large bodies of text, these models can predict the next word to write sentences and paragraphs of coherent content. The concern surrounding these tools is strong – from New York schools banning the use of ChatGPT to Stack Overflow and Reddit banning answers and art generated from LLMs. While many applications are strictly limited to writing text, a few applications exploit the patterns to work on code as well. The hype surrounding these applications ranges from adoration (“I’ve rebuilt my workflow around these tools”) to fear, uncertainty, and doubt (“LLMs are going to take my job”). In the Communications of the ACM, Matt Welsh goes so far as to declare we’ve reached “The End of Programming.” While integrated development environments have had code generation and automation tools for years, in this post I will explore what new advancements in AI and LLMs mean for software development.
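The next-word-prediction mechanic described above can be illustrated at toy scale. The sketch below trains a bigram model on an invented corpus: real LLMs use neural networks over vast bodies of text, but the idea of predicting the next word from patterns in prior text is the same.

```python
# A minimal sketch of next-word prediction, the core mechanic behind LLMs,
# scaled down to a bigram model over a tiny invented corpus.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = ("the model predicts the next word and the model writes "
          "coherent text one word at a time")
counts = train_bigrams(corpus)
print(predict_next(counts, "the"))  # "model" – it follows "the" most often
```

Scaling this pattern-matching idea from word pairs to billions of parameters is, loosely, what makes LLM output read as coherent prose or plausible code.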
Read the post in its entirety.
#6 How to Use Docker and NS-3 to Create Realistic Network Simulations
by Alejandro Gomez
Sometimes, researchers and developers need to simulate various types of networks with software that would otherwise be hard to do with real devices. For example, some hardware can be hard to get, expensive to set up, or beyond the skills of the team to implement. When the underlying hardware is not a concern but the essential functions it performs are, software can be a viable alternative.
NS-3 is a mature, open-source networking simulation library with contributions from the Lawrence Livermore National Laboratory, Google Summer of Code, and others. It has a high degree of capability to simulate various kinds of networks and user-end devices, and its Python-to-C++ bindings make it accessible for many developers.
In some cases, however, it is not sufficient to simulate a network. A simulator might need to test how data behaves in a simulated network (e.g., testing the integrity of User Datagram Protocol (UDP) traffic in a Wi-Fi network, or how 5G data propagates across cell towers and user devices). NS-3 permits such simulations by piping data from tap interfaces (a feature of virtual network devices provided by the Linux kernel that pass Ethernet frames to and from user space) into the running simulation.
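As a small illustration of the kind of UDP integrity check described above, the sketch below runs both endpoints in a single process over the loopback interface, tagging each datagram with a digest that the receiver can verify. The NS-3 simulation and tap-interface plumbing themselves are out of scope here; this only shows the integrity-checking idea.

```python
# A minimal sketch of verifying UDP payload integrity, with sender and
# receiver in one process over loopback rather than an NS-3 network.
import hashlib
import socket

def make_datagram(payload: bytes) -> bytes:
    """Prefix the payload with its SHA-256 digest (32 bytes)."""
    return hashlib.sha256(payload).digest() + payload

def verify_datagram(datagram: bytes) -> bool:
    """Recompute the digest on arrival; a mismatch means corruption."""
    digest, payload = datagram[:32], datagram[32:]
    return hashlib.sha256(payload).digest() == digest

# Send and receive one datagram over loopback UDP.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS assigns an ephemeral port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(make_datagram(b"hello simulation"), rx.getsockname())
received, _ = rx.recvfrom(4096)

print(verify_datagram(received))                  # intact datagram
print(verify_datagram(received[:-1] + b"\x00"))   # simulated corruption
```

In the setup the post describes, the same check would run inside Docker containers at either end of the NS-3-simulated network, with the simulator deciding whether datagrams arrive intact, delayed, or dropped.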
This blog post presents a tutorial on how you can transmit live data through an NS-3-simulated network, with the added advantage of having the data-producing/data-receiving nodes be Docker containers. Finally, we use Docker Compose to automate complex setups and make repeatable simulations in seconds.
Read the post in its entirety.
#5 5 Challenges to Implementing DevSecOps and How to Overcome Them
by Joe Yankel and Hasan Yasar
Historically, software security has been addressed at the project level, emphasizing code scanning, penetration testing, and reactive approaches for incident response. Recently, however, the discussion has shifted to the program level to align security with business objectives. The ideal outcome of such a shift is one in which software development teams act in alignment with business goals, organizational risk, and solution architectures, and these teams understand that security practices are integral to business success. DevSecOps, which builds on DevOps principles and places additional focus on security activities throughout all phases of the software development lifecycle (SDLC), can help organizations realize this ideal state. However, the shift from project- to program-level thinking raises numerous challenges. In our experience, we have observed five common challenges to implementing DevSecOps. This SEI Blog post articulates these challenges and provides actions organizations can take to overcome them.
Read the post in its entirety.
#4 Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?
by Ipek Ozkaya, Anita Carleton, John E. Robert, and Douglas Schmidt (Vanderbilt University)
Has the day finally arrived when large language models (LLMs) turn us all into better software engineers? Or are LLMs creating more hype than functionality for software development and, at the same time, plunging everyone into a world where it is hard to distinguish the perfectly formed, yet sometimes fake and incorrect, code generated by artificial intelligence (AI) programs from verified and well-tested systems?
This blog post, which builds on ideas introduced in the IEEE paper Application of Large Language Models to Software Engineering Tasks: Opportunities, Risks, and Implications by Ipek Ozkaya, focuses on opportunities and cautions for LLMs in software development, the implications of incorporating LLMs into software-reliant systems, and the areas where more research and innovation are needed to advance their use in software engineering.
Read the post in its entirety.
#3 Rust Vulnerability Analysis and Maturity Challenges
by Garret Wassermann and David Svoboda
While the memory safety and security features of the Rust programming language can be effective in many situations, Rust’s compiler is very particular about what constitutes good software design practices. Whenever design assumptions disagree with real-world data and assumptions, there is the possibility of security vulnerabilities, and of malicious software that can take advantage of those vulnerabilities. In this post, we will focus on users of Rust programs, rather than Rust developers. We will explore some tools for understanding vulnerabilities, whether the original source code is available or not. These tools are important for understanding malicious software where source code is often unavailable, as well as for commenting on possible directions in which tools and automated code analysis can improve. We also comment on the maturity of the Rust software ecosystem as a whole and how that might impact future security responses, including via the coordinated vulnerability disclosure methods advocated by the SEI’s CERT Coordination Center (CERT/CC). This post is the second in a series exploring the Rust programming language. The first post explored security issues with Rust.
Read the post in its entirety.
#2 Software Modeling: What to Model and Why
by John McGregor and Sholom G. Cohen
Model-based systems engineering (MBSE) environments are intended to support the engineering activities of all stakeholders across the envisioning, developing, and sustaining phases of software-intensive products. Models, the machine-manipulable representations and the products of an MBSE environment, support efforts such as the automation of standardized analysis techniques by all stakeholders and the maintenance of a single authoritative source of truth about product information. The model faithfully represents the final product in those attributes of interest to various stakeholders. The result is an overall reduction of development risks.
When initially envisioned, the requirements for a product may seem to represent the right product for the stakeholders. During development, however, the as-designed product comes to reflect an understanding of what is really needed that is superior to the original set of requirements. When it comes time to integrate components, during an early incremental integration activity or a full product integration, the original set of requirements is no longer represented and is no longer a valid source of test cases. Many questions arise, such as
- How do I evaluate the failure of a test?
- How can I evaluate the completeness of a test set?
- How do I track failures and the fixes applied to them?
- How do I know that applied fixes do not break something else?
Such is the case with requirements, and much the same should be the case for a set of models created during development: are they still representative of the implemented product undergoing integration?
One of the goals for robust design is to have an up-to-date single authoritative source of truth, in which discipline-specific views of the system are created using the same model elements at each development step. The single authoritative source will usually be a collection of requirement, specification, and design submodels within the product model. The resulting model can be used as a valid source for complete and correct verification and validation (V&V) activities. In this post, we examine the questions above and other questions that arise during development, and use the answers to describe modeling and analysis activities.
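One of the questions above, evaluating the completeness of a test set, can be made concrete with a toy traceability check against a single authoritative source of truth. The requirement IDs and the test-to-requirement mapping below are invented for illustration; a real MBSE environment would derive them from the product model.

```python
# A toy completeness check: which requirements in the authoritative
# model are not covered by any test case? IDs are hypothetical.

def untested_requirements(requirements, tests):
    """Return the requirement IDs not traced to by any test case."""
    covered = {req for traced in tests.values() for req in traced}
    return sorted(set(requirements) - covered)

# Hypothetical single source of truth: requirements and the tests
# that trace back to them.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
tests = {
    "test_login": ["REQ-1"],
    "test_logout": ["REQ-1", "REQ-3"],
}

print(untested_requirements(requirements, tests))  # ["REQ-2"] is uncovered
```

Keeping this mapping in the model, rather than in a stale requirements document, is what lets the check stay valid as the as-designed product evolves.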
Read the post in its entirety.
#1 Cybersecurity of Quantum Computing: A New Frontier
by Tom Scanlon
Research and development of quantum computers continues to grow at a rapid pace. The U.S. government alone spent more than $800 million on quantum information science (QIS) research in 2022. The promise of quantum computers is substantial – they will be able to solve certain problems that are classically intractable, meaning a conventional computer cannot complete the calculations within human-usable timescales. Given this computational power, there is growing discussion surrounding the cyber threats quantum computers may pose in the future. For instance, Alejandro Mayorkas, secretary of the Department of Homeland Security, has identified the transition to post-quantum encryption as a priority to ensure cyber resilience. There is very little discussion, however, on how we will protect quantum computers in the future. If quantum computers are to become such valuable assets, it is reasonable to project that they will eventually be the target of malicious activity.
I was recently invited to participate in the Workshop on Cybersecurity of Quantum Computing, co-sponsored by the National Science Foundation (NSF) and the White House Office of Science and Technology Policy, where we examined the emerging field of cybersecurity for quantum computing. While quantum computers are still nascent in many ways, it is never too early to address looming cybersecurity concerns. This post explores issues related to creating the discipline of cybersecurity for quantum computing and outlines six areas of future research in the field of quantum cybersecurity.
Read the post in its entirety.
Looking Ahead in 2024
We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI’s work in artificial intelligence, cybersecurity, and edge computing.