Broad Agency Announcement (BAA)
Requirements (FY2018) Update #1

This is an update and revised reissuance of BAA FY2018 Topic 4: Pervasive Learning System (PERLS) – Personal Self-Regulated Learning and Training

The ADL Initiative requests five-page white paper responses to the revised Topic 4 solicitation (below). Offerors who previously submitted to this topic are encouraged to revise and resubmit their white papers. Offerors should otherwise follow the guidelines in the original BAA solicitations.

White Papers are requested by 18 May 2018 (submissions received by 0759 ET on 21 May 2018 will be included in this review process).

This solicitation is for maturing the existing PERLS software (described below). This software will be provided as Government Off-The-Shelf (GOTS) software (Distribution Statement C with ADL Initiative permission) upon contract award. The ADL Initiative will make this software available for offeror examination, by appointment, from 9 April to 11 May 2018. Offerors interested in reviewing the software must send a request via email to

The ADL Initiative will hold two virtual industry days via GoToMeeting, on 18 and 19 April at 1300-1430 ET. (Both meetings are intended to cover the same topics, but are offered twice for scheduling purposes.) To register for a meeting send a request via email to

White Papers should be submitted directly to the email address; this email group includes Washington Headquarters Services (WHS)/Acquisition Directorate (AD) contracts personnel (Mr. Weedin and Mr. Slagle) and ADL Initiative Points of Contact (POC) (Dr. Schatz, Dr. Brewer, and Mr. Bizub).

WHS/AD Contracting POCs:
Mr. Anthony Weedin, Team Lead
(703) 545-6963

Mr. Steven Slagle, Branch Chief
(703) 545-1574

ADL Initiative POCs:
Dr. Sae Schatz

Dr. Van Brewer
External R&D Lead

Mr. Warren Bizub
Operations Lead


4 (revised). Pervasive Learning System (PERLS) – Personal Self-Regulated Learning and Training

The ADL Initiative seeks proposals for the continued maturation, interoperability integration, and applied testing of the PERLS personal assistant for learning capability.

The ADL Initiative has developed a capability for self-regulated learning, that is, a mobile personal assistant for learning. The system, called “PERLS,” was developed through prior research into adaptive, distributed micro-learning theory. The prototype uses a native iOS application combined with server-based content and processes to present personalized instruction to learners, adapted via machine learning based upon learner interaction cues, learning phases, and context awareness. The technical architecture uses both a micro-services architecture, for flexible and agile software, and graph databases, for flexible, real-time recommendation engines. The current prototype meets criteria between Technology Readiness Levels 5 and 6.
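To make the graph-based recommendation concept above concrete, the sketch below shows, in simplified form, how a recommender might rank unseen content by topic overlap with items a learner has already completed. This is an illustrative sketch only: the content items, topics, and scoring rule are hypothetical and do not reflect the actual PERLS code base, which uses a graph database and machine learning.

```python
# Illustrative sketch of graph-style content recommendation (hypothetical data).
from collections import Counter

# Edges from content items to the topics they cover -- a tiny stand-in for a
# graph database's content/topic relationships.
CONTENT_TOPICS = {
    "intro-video": {"leadership", "communication"},
    "case-study-1": {"leadership", "ethics"},
    "quiz-basics": {"communication"},
    "deep-dive": {"ethics", "decision-making"},
}

def recommend(completed, limit=2):
    """Rank unseen content by topic overlap with what the learner finished."""
    seen_topics = Counter()
    for item in completed:
        seen_topics.update(CONTENT_TOPICS.get(item, ()))
    candidates = [c for c in CONTENT_TOPICS if c not in completed]
    # Score each candidate by how often the learner engaged with its topics.
    scored = sorted(
        candidates,
        key=lambda c: -sum(seen_topics[t] for t in CONTENT_TOPICS[c]),
    )
    return scored[:limit]

print(recommend(["intro-video"]))  # → ['case-study-1', 'quiz-basics']
```

A production recommender would also weight learning phase and context cues, as the paragraph above notes; this sketch isolates only the graph-traversal idea.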

Proposals should evolve the prototype capability to a mature Technology Readiness Level 6, to support near-term transition of this capability into operational use.

At each proposed major stage of development, the work shall be completed in such a manner that the PERLS system is functional and documented to such a degree that it can be set up and implemented from documentation only. Offerors will be required to define (and meet) specific measures of performance for each milestone.

Proposals must address all of the following, although they need not be limited to these items:

  • Mature the PERLS capability. Continue development of the Government-owned software code base to include cross-platform capability (i.e., desktop computer access and, when appropriate in the development cycle, support for other mobile OSs, such as Android), cross-application recommendations, enhancement and validation of the recommender system's accuracy, and content crowdsourcing features. This also includes maturation and documentation of APIs for third-party content, including data, granularity, and content structure, and enhancing management of users, roles, and content. (Note: the PERLS software must use established learning technology specifications and standards for APIs and data formats, such as xAPI, where applicable. The existing source code already integrates these, and future preferred specifications and standards will be defined under the Total Learning Architecture [TLA] specifications set.)
  • Develop authoring/administration capabilities. Mature the existing content authoring and management system to allow various user roles to create and manage learning content—as will be necessary and appropriate to support an active user base. Specific needs include capabilities for content coverage and gap-analysis, content production support, metadata management, usage analytics, migration functions, version control, and content policy enforcement. Authoring and administration functionality must be validated as appropriate and suitable for use by a representative user group and in accordance with modern industry expectations.
  • Scale up pilot tests. As a follow-on to the pilot tests conducted in FY17 (which involved formative prototype testing based on a Kirkpatrick level 1 evaluation), conduct subsequent prototype testing at Kirkpatrick level 2 or above, with a minimum of 50 users over a trial of at least two months. Such a trial will include sufficient and varied content to support a prototype test scenario. The test must also validate and demonstrate the ability of the recommender system to adapt content to individual users. The ADL Initiative has already identified organizational partners to participate in the testing; these partners will be identified to offerors.

Proposals should address the following incremental capabilities as Options. (Related topics can be combined into the same Option; separate Options are not required for each item.)

  • Determine best practices for self-regulated learning content, including documentation of (a) methods and concepts for converting content into micro-content, including the development of a standard for content refactoring, and (b) a methodology and tools for planning, acquiring, transforming, ingesting, and analyzing the efficacy of a corpus.
  • Provide documentation of methods, policies, and procedures for developing micro-learning content appropriate for self-regulated learning systems. Provide documentation on all systems, applications, services, languages, and licenses used in the development of PERLS, including certification that all licenses are legal and appropriate to be included and conveyed with the Government-owned source code. Provide thorough documentation of the system architecture, along with a code base commented sufficiently to yield an intelligible code structure. Industry-accepted standards for architecture documentation, system architecture, and code commenting shall be established. All methods, techniques, and processes necessary for setup, management, and maintenance of the PERLS system should be documented according to an industry-accepted standard.
  • Define an approach for PERLS functioning in an offline scenario. That is, document the requirements for enabling PERLS to function, intermittently, without network connectivity. This would necessarily address recording, recommending, and synchronizing content and learning records.
  • Integration with external content sources. Demonstrate PERLS integration of learning content from an external platform, enabled through well-designed and documented APIs. Demonstrate this integration with at least three (3) third-party learning content sources, which should be incorporated into the PERLS recommendation system side-by-side with its overall corpus of internal content. The ADL Initiative can provide recommendations and assistance in identifying third-party learning content sources upon request.
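The offline Option above implies a store-and-forward pattern: learning records are queued locally while the device lacks connectivity and synchronized once the network returns. The sketch below illustrates that pattern in miniature. It is a hypothetical illustration only, not PERLS code; class and method names are invented, and a real implementation would also handle persistence, retries, and conflict resolution.

```python
# Store-and-forward sketch for intermittent connectivity (hypothetical names).
class OfflineRecordQueue:
    def __init__(self, send):
        self.send = send      # callable that delivers one record, e.g. to an LRS
        self.pending = []     # records awaiting connectivity

    def record(self, stmt, online):
        """Record a statement, deferring delivery while offline."""
        if online:
            self.flush()      # drain the backlog first to preserve ordering
            self.send(stmt)
        else:
            self.pending.append(stmt)

    def flush(self):
        while self.pending:
            self.send(self.pending.pop(0))

delivered = []
q = OfflineRecordQueue(delivered.append)
q.record({"verb": "launched"}, online=False)   # queued locally
q.record({"verb": "completed"}, online=True)   # flushes backlog, then sends
print([s["verb"] for s in delivered])          # → ['launched', 'completed']
```

Draining the backlog before sending the newest record keeps learning records in chronological order, which matters for downstream recommendation and analytics.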

Offerors should consider integrating their PERLS efforts with the ADL Initiative’s ongoing TLA project, including its annual testing event (which, this year, is scheduled for August 2018). PERLS would serve as an “activity provider” in the larger TLA project. (For more details on the TLA architecture, contact the ADL Initiative.) Participation in the TLA provides technical integration testing opportunities to verify the operation of interoperability specifications, an empirical testing population and established research processes, and opportunities to integrate with external learning content. Integration with the TLA project would require the following tasks. Offerors should ensure that these items do not distract from the primary goal of maturing the PERLS capability.

  • Facilitation of TLA empirical testing. Coordinate with TLA project performers to support technical integration. This may include providing documentation for PERLS APIs for integration with external content sources, exchanging learning data, and participating in technical integration events.
  • Contributions to TLA specifications documents. As appropriate and applicable, provide feedback or other contributions to the documentation for the TLA project, such as xAPI profiles, data protocols, architecture recommendations, or other APIs.