Installation instructions

Prerequisites

  1. Install the Rust toolchain via rustup (see rustup.rs) to get the cargo package manager and the tools needed to compile plx from the crates.io package.

  2. Install the standard build tools for C and C++ (gcc and g++).

  3. Configure the $EDITOR environment variable to point to your IDE, so that PLX can start it automatically.

    1. Currently, we only support code, codium, and idea.
      1. For now this feature is unstable for terminal-based editors. Feel free to open a new issue if you do not see your favorite IDE here.
    2. On macOS and Linux, add a line like export EDITOR=<ide> to your shell configuration (~/.bashrc for example).
    3. On Windows, run setx /m EDITOR <ide> (to check it worked, run echo %EDITOR% in a new terminal).
    4. If it doesn't work, make sure to reload your shell.
    5. When you enter an exo, your $EDITOR will automatically open the correct file.

Installation for students and teachers

We do not provide binaries for the project, so you have to compile it yourself, but this is easy with cargo.

To install PLX without cloning the repository, you can install and compile it from crates.io with

cargo install plx

Then just run the plx command in your terminal. (If it is not found, restart your terminal or check that ~/.cargo/bin/ is in your $PATH.)

Note: we might provide binaries later for ease of installation, but it is not a priority right now.

Why?

The age of features is over, we are living in the age of experiences.
Aral Balkan, during a UX conference titled "Superheroes & Villains in Design".

This is not just a cool project because Rust is hype, because there is a super reactive watch mode and rich feedback... we are building a new learning experience to get closer to deliberate practice in computer science!

Why

Coding exercises are at the heart of learning a programming language, but having exercises with small programs or functions to implement does not guarantee that the practice experience will be efficient. According to deliberate practice, deep learning requires the shortest possible feedback loop, yet the current experience is far from smooth and effective.

Take the example of an exercise: a small C program that asks for a first name, last name, and age, and prints a sentence including these values. The exo, provided in a PDF, includes an instruction, a piece of starter code, an example execution, and a solution on the next page.
To solve the exercise, once the instruction is read, we open an IDE, manually create a main.c file, copy-paste the starter code, read the existing code, and complete the parts to be developed.
Once done, we move on to compilation by opening a terminal in the IDE and typing gcc main main.c & main — oops, it was gcc -o main main.c && ./main. We enter the first name, last name, and age, then manually compare the output to see if it matches the expected result. We reopen the instruction and... no, the age is missing! Back to the code, we add the age, rebuild and rerun, and enter the first name, last name, and age again. Is the output correct this time? Now let's check our code against the solution. Okay, we could have used printf instead of two calls to puts() to print the full name. On to the next exo: find its instruction, here it is, and the cycle starts again...

All these small extra steps around writing the code seem insignificant at first glance, but their accumulation results in significant overall friction. In addition, there will only be a few manual runs, i.e. very few opportunities to track progress and adjust the code along the way, on top of a small mental load for compiling and running by hand.

Now imagine that in a C lab we are building a battleship game in the terminal. Automatically testing a C program end to end is not an easy task, partly due to the lack of suitable tooling. To test the overall behavior, we have to manually start a game and play several moves, checking at each step whether the game state and display are consistent. Once one part of the game works, developing the rest risks silently breaking other parts.

One last concrete case: while developing a small shell in C++, testing the pipe implementation requires compiling the shell and the available CLIs, starting the shell, then typing a few commands like echo hey there | toupper to check that the output is indeed HEY THERE, which is very slow! Testing lots of edge cases (several pipes, a pipe symbol without surrounding spaces, redirecting stdout but not stderr, the CLI on the right of the pipe exiting, ...) quickly becomes tedious.

In short, the lack of automated validation slows down both development and learning. Simply adding automated tests does not solve everything, because test runners are not designed for exos (no instruction, no hints, unsuitable display, no watch mode, ...); a layer of automation around them is missing. Moreover, writing tests for lots of tiny exos would be far too much work, and in many cases comparing the output with a solution is enough to estimate whether the program works.

The PLX experience

The challenge is to reduce friction to the bare minimum, automate all the administrative steps, and provide rich, automatic, and fast feedback during training.

This experience will be achieved through:

  1. Removing the manual compilation and execution steps
    No knowledge of the build system or its commands is needed; everything happens automatically as soon as one of the files is saved (just press Ctrl+S or wait for the IDE to save automatically).
  2. Removing all manual typing of values in the terminal
    Allow defining program arguments and content to inject into stdin, with variants to test different cases.
  3. Removing the output comparison steps
    The output will be compared automatically, and a precise diff (with differences highlighted on each line) will be displayed to see the differences immediately. The diff could support trimming the output or individual lines to ignore insignificant whitespace. Newlines and tabs will be displayed with a visible symbol.
  4. Displaying and comparing with the solution
    Once an exo is solved, being able to self-assess one's answer against a teacher's solution is already a great help. It will be possible to see a diff between one's answer and the solution directly in PLX.
  5. A smooth transition between exos
    Moving to the next exo should take less than 4 seconds: the time to switch from the IDE to PLX (Alt+Tab), press a shortcut (n) in PLX to display the next exo, and let the IDE react to the file-open request.
  6. No window switching during an exo
    PLX on the left with the full instruction, the IDE on the right with the single relevant file: once both windows are open, there is no more switching to do, since everything is already available. The instruction is displayed in PLX, and as soon as the open file is saved, the build and execution restart. Build errors are visible, as well as the test results.

Context

This project draws inspiration from Rustlings, which helps you get used to Rust compiler errors by fixing compilation problems or completing around a hundred small exercises. In the same spirit, other languages followed with golings, cplings, ziglings, ... That same project inspired PRJS (Practice Runner for JavaScript), developed during the last open lab of the WEB course, which lets you train on functions verified via unit tests written and run with Vitest in the background.

PLX pushes the experience even further by supporting several languages, including automatic compilation, and by supporting more primitive test types that are simpler to set up than a test framework.

Note: unlike Rustlings, this repository does not contain real exercises, only the code of the PLX TUI. Demonstration exercises will be written in different languages under an examples subfolder.

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

Unreleased

Added

Changed

0.2.0 - 2024-09-06

Added

  • Home page with ASCII art, quick help and tagline
  • Help page to see all shortcuts and alternatives
  • List page to browse the lists of skills and exos
  • Exo preview inside List page to show exo instruction without starting to compile it
  • Train page to do an exo from start to end
  • Automatic reload of checks when saving one of the exo files
  • Open $EDITOR (only GUI IDE right now) when opening a new exo
  • Show checks details with a nice word level diff generated by similar.
  • Show a Solution page with basic syntax highlighting via syntect
  • Support an exo structure with course - skills - exo hierarchy, with metadata described in TOML
  • Enable parsing TOML files directly into Rust structs using serde and toml crates.
  • Generate various errors for parsing, process, and worker execution; we don't display them yet. When running plx in a folder without a course.toml, the TUI will not start. It only displays skills and exos that were successfully parsed.
  • Create example exos for manual and automated testing under the examples folder
  • Switch to the next exo when an exo is done
  • Start storing exo state (In progress, Done), but this is not fully working yet, therefore not displayed
  • Create a logo! Inspired by the Delibay and PRJS gradient styles
  • Write logs to debug.log via the log and simplelog crates, to see events received by core without making noise in the UI

Changed

  • Run CI with a macOS runner, in addition to Ubuntu and Windows
  • Rewrite a concise README in English and include it for crates.io release

0.1.2 - 2024-08-26

Added

  • Trivial change to show CI release

0.1.1 - 2024-08-21

Added

  • Add CI/CD jobs for build+test+formatting+tag+release (this release is used to test everything is working).
  • Do a small change in main.rs output to see changes

0.1.0 - 2024-08-19

Added

  • Create an empty Rust crate to reserve the name on crates.io, with the Markdown excluded (it will be re-included later, once the main README has been rewritten in English and shortened).
  • Define license-file = "LICENSE" in Cargo.toml and create a LICENSE file with All rights reserved mention, just to be able to run cargo publish. There is no SPDX license identifier for "proprietary".
  • Write the first version of the README in French, with the WHY and context details, in addition to the learning experience and the planned features.

Introduction

PLX is a project developed to enhance the learning of programming languages, with a focus on a smooth and optimized learning experience. The goal of this project is to reduce the usual friction involved in completing coding exercises (such as manual compilation, running, testing, and result verification) by automating these steps.

PLX offers a terminal user interface (TUI) developed in Rust and supports multiple languages (currently C and C++). It enables automatic compilation as soon as a file is saved, automated checks to compare program outputs, and instant display of errors and output differences. The solution code can also be displayed. The project draws inspiration from Rustlings and aims to create a more efficient learning experience, particularly for programming courses at HEIG-VD.

Features

We have already described the improved experience and the problems with the current one in detail, but here is a complete central list of the features we need to develop during PDG, along with some other ideas for later. Some of them will have a dedicated issue on GitHub, but it is easier to see the global picture and progress here.

Functional requirements

| Feature | Status | Description |
| --- | --- | --- |
| View the Home page | DONE | |
| View the List page | DONE | With the list of skills and exos |
| C++ exo build + execution, without configuration and without a visible build folder | DONE | Support compiling C++ in single or multiple files via Xmake, without config, in a separated build directory |
| Java exo build + execution, without configuration and without a visible build folder | TODO | Support compiling Java in single or multiple files with javac |
| Exo creation with one main file or possibly more starting files | DONE | Support a way to describe this metadata and indicate which files are relevant |
| Definition of automated checks verifying outputs | DONE | |
| Run automated output checks on starting files | DONE | Run the check, generate the result and a diff if it differs |
| Run automated output checks on solution files | DONE | Adapt the compilation to build the solution files instead, and do the same things, after having checked the base files |
| Execute a check on a given binary file | DONE | Check and display whether exercises passed or failed |
| Show why checks are failing | DONE | Show why an exercise failed (diff between output and solution) |
| Start of the app | DONE | It is possible to resume the last exo, or the next logical one, by pressing r |
| Preview of exos | DONE | When searching for an exercise to do, show a preview of the exo with its metadata, but do not run compilation |
| Save and restore exo states | TODO | Save exo states (done / not done / in progress) and restore them on subsequent app launches. Immediately show the states in the list with colors. Enable exo resuming |
| Code editor opening | DONE | Open the code editor when launching or switching to another exercise; the editor is defined via $EDITOR |
| Provide integrated documentation | DONE | Press ? to get integrated documentation of all available keybinds |

Non-functional requirements

  1. It should be easy to create new exos and maintain them; converting an existing C++ exo should take less than 2 minutes.
  2. The watcher should be performant: it should only watch files that could be modified by the student or teacher, and it should take less than 100ms to detect a change.
  3. PLX should also be usable during exo creation, to make sure the checks are passing.
  4. Once an exo is opened, with the IDE window on the right and the terminal running PLX on the left, students should not need to open or move other windows and should only need to Alt+Tab. All automatable steps should be automated so students can focus on learning tasks (including build, build configuration, running, output diffing, manual entries in the terminal, detecting when to run, showing the solution, and switching to the next exo).
  5. Switching to the next exo should take less than 10 seconds. After this time, the IDE should be open on the new file and PLX should show the new exo details.
  6. Trivial exo files should not need any build configuration; PLX should be able to guess how to build the target from the available files.
  7. Cross-platform compatibility: PLX should work on Linux, Windows, and macOS machines.
  8. PLX should be designed in a modular way that allows easy addition of features.
  9. Compiling an exercise should take less than 10 seconds.
  10. When saving a file, the compilation starts. If a compilation is already running when a file is saved, it should be killed and a new compilation launched.
  11. When launching the tests, if tests are already running, they should be stopped and relaunched.
  12. PLX must have a file watcher and a file parser to watch the edited file(s) and return their states. This is necessary to flag the exercise (in progress, done, not started) and to return error descriptions or the status (passed / failed) of the exercise.

Architecture

Running architecture

app system

workflow

File architecture

.
├── .course-state.toml # saves the current skill and exo index
├── course.toml # defines your course info and skill order
├── pointers
│   ├── crash-debug
│   │   ├── .exo-state.toml # save status of exo
│   │   ├── exo.toml # exo definition
│   │   ├── main.c
│   │   └── main.sol.c
│   ├── crash-debug-java
│   │   ├── .exo-state.toml
│   │   ├── exo.toml
│   │   ├── Main.java
│   │   ├── Main.sol.java
│   │   └── Person.java
│   └── skill.toml # define your skill info and exo order
├── other skills
└── build
    ...

Defining all toml

course.toml

To create a course, we need a name and a selection of skills (chapters) in a specific order.

name = "Full fictive course"
skills = ["intro", "pointers", "parsing", "structs", "enums"]

skill.toml

To create a new skill, we need a name and an ordered list of the names of its exercises.

name = 'Introduction'
exos = ['basic-args', 'basic-output']

exo.toml

For each exercise we have:

  • a name
  • the instructions
  • all the needed checks to be done
name = 'Basic arguments usage'
instruction = 'The first 2 program arguments are the firstname and the number of legs of a dog. Print a full sentence about the dog. Make sure there are at least 2 arguments, and print an error if not.'

[[checks]]
name = 'Joe + 5 legs'
args = ["Joe", "5"]
test = { type = "output", expected = "The dog is Joe and has 5 legs" }

[[checks]]
name = 'No arg -> error'
test = { type = "output", expected = "Error: missing argument firstname and legs number" }

[[checks]]
name = 'One arg -> error'
args = ["Joe"]
test = { type = "output", expected = "Error: missing argument firstname and legs number" }

Generated toml

.course-state.toml

This state is used to save the skill and exercise indices for resuming.

curr_skill_idx = 0
curr_exo_idx = 0

.exo-state.toml

For each exercise, there is a state that can be in progress, done, or not done. There is also an optional favorite flag to add the exercise to a personal selection.

state = "InProgress"
favorite = false

Mockups

Case study with the classic experience

Let's consider a typical coding exercise that David needs to solve:


Dog message

Write a small program that displays this message built from the first 2 args. You don't need to check that the second arg is a number.

Example execution:

> ./dog 
Error: missing argument firstname and legs number
> ./dog Joe 4
The dog is Joe and has 4 legs
Solution
#include <stdio.h>

int main(int argc, char **argv) {
  if (argc < 3)
    printf("Error: missing argument firstname and legs number");
  else
    printf("The dog is %s and has %s legs\n", argv[1], argv[2]);
}

To solve this exercise, David first reads the instruction, then opens his IDE, manually creates a main.c file, copy-pastes the starter code, reads the existing code, and completes the parts that need to be developed. Once he believes the code is ready, David compiles it by opening a terminal in the IDE and typing gcc dog main.c — oops! it should have been gcc -o dog main.c

PS C:\Users\david\CLionProjects\dog> gcc dog main.c
c:/program files/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/12.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find dog: No such file or directory
collect2.exe: error: ld returned 1 exit status
PS C:\Users\david\CLionProjects\dog> gcc -o dog main.c
PS C:\Users\david\CLionProjects\dog> ./dog Joe 4
The dog is Joe and has legs

After this, David inputs the name and number of legs and compares the output manually to see if it matches the expected result. Opening the instruction again, he realizes that the number of legs is not displayed! He returns to the code, adds the legs number, recompiles, and runs the program again, entering the name and number of legs once more. This time, is the output correct? Now David checks his code against the solution... Okay, he could have used printf instead of calling puts() twice to display the full name. Moving on to the next exercise, he searches for the instruction, and the cycle repeats...

All these additional steps around writing the code may seem insignificant at first glance, but their accumulation results in considerable friction. Additionally, there will be very few manual executions, meaning limited opportunities to gauge progress and adjust the code accordingly, coupled with the mental burden of compiling and running the program manually.

classic-xp.opti.svg

This workflow summarizes all the different steps that David performs until he finishes the exercise.

Case study with the PLX experience

Let's consider a struct exercise that Alice needs to solve:


2. Basic dog struct

Write a basic dog structure with a name and legs number. Print given dog with first 2 args. If -n ask to define a new dog, and if -s show the size of the struct.

Solution
#include <stdio.h>  
#include <stdlib.h>  
#include <string.h>  

typedef struct {  
   const char *firstname;  
   short legs_number;  
} Dog;  

void printDog(Dog *dog) {  
   printf("The dog is %s and has %d legs\n", dog->firstname, dog->legs_number);  
}  
  
int main(int argc, char *argv[]) {  
  
   if (strcmp(argv[1], "-n") == 0) {  
      printf("New dog\nName: ");  
      printf("\nNumber of legs: ");  
      Dog newDog = {.firstname = argv[2], .legs_number = atoi(argv[3])};  
      printDog(&newDog);  
   } else if (strcmp(argv[1], "-s") == 0) {  
      printf("sizeof(Dog): %lu\n", sizeof(Dog));  
   } else {  
      Dog dog = {.firstname = argv[1], .legs_number = atoi(argv[2])};  
      printDog(&dog);  
   }  
}

To run PLX, Alice needs to launch plx in the folder that contains the exercises. If there is no ".plxproject" file in the given top-level folder, the app shows a warning message.

home.opti.svg

Arrows on the picture illustrate the events. This is the home layout of the PLX app. There are three options on this page. First, press r to resume the last exercise that still needs to be finished; when PLX starts an exercise, it automatically opens the IDE with the correct file and compiles it a first time. Second, press l to access the list of exercises. Lastly, press ? to show the app's commands. Alice is using PLX for the first time, so she presses l and enters the list view.

list-1.opti.svg

On the list view, there are two columns:

  • the left one shows the list of skills and doesn't change
  • the right one shows the list of exercises for the current skill (the list changes immediately when she selects another skill)

Press Enter to go inside a skill and access its list of exercises, and Esc to go back.

list-2.opti.svg

The colours of exercises mean: green = done, orange = at least one check passes, default colour = otherwise. To navigate the list, use k to go up and j to go down.

preview-exo.opti.svg

Press Enter to open the exercise preview and Esc to go back to the exercise list. The preview shows the instruction and the path of the main file, but doesn't run any build in the background, to save resources. The preview is not a separate page: the j and k shortcuts keep working and the preview adapts accordingly.

exo-1.opti.svg

Alice has found the exercise she needs to solve. Her IDE opened the C file corresponding to the exercise. Checks are green when they pass and red otherwise. The first failing check is automatically opened. To navigate and see the details of the other checks, use Ctrl+d (down) and Ctrl+u (up). When Alice saves the exercise file in her IDE, PLX automatically runs the compilation and the checks, and updates their results.

error.opti.svg

Alice makes some changes in her code to resolve check 2, but when she saves her file, PLX runs the compilation and reports a compilation error.

exo-2.opti.svg

When all checks are green, the exo is done and Alice can press s to see the solution. Scroll inside the solution with k (up) and j (down).

plx-xp.opti.svg

This workflow summarizes all the different steps that Alice performs in PLX until she finishes the exercise.

Landing page

You can find this project's landing page here

Technical choice

Why Rust?

Rust was chosen for our program for several key reasons:

  1. Performance: Rust is renowned for its high performance, comparable to languages like C and C++. This is crucial for our application, where speed and efficiency are paramount.
  2. Memory safety: one of Rust's standout features is its ownership model, which enforces strict memory safety at compile time. This reduces the chances of memory leaks and related bugs, making the application more robust.
  3. Concurrency: Rust allows us to write safe concurrent code. This is particularly important for our application, as many modules within PLX require multithreading; Rust's concurrency model ensures that the interactions between these modules are safe and efficient, leading to better performance and stability.
  4. Crates: Rust's ecosystem is enriched by a vast collection of libraries, known as crates, which provide pre-built functionality for various tasks.

Why ratatui ?

Ratatui provides a simple and intuitive API for building TUIs, which speeds up the development process. On the user side, it is much easier to use than raw terminal commands. Ratatui also offers a high degree of flexibility in designing widgets and layouts, allowing us to create a clean, simple, and easy-to-use interface that meets our specific needs.

Our TUI also needs to remain responsive. Ratatui is designed to be performant.

Why a TUI ?

TUIs are inherently lightweight and consume fewer system resources compared to GUIs. This makes them particularly suitable for environments with limited resources or where performance is a critical concern.

TUIs often rely on keyboard shortcuts and command-line inputs, which can be much faster for experienced users to navigate than using a mouse with a GUI. This can lead to increased productivity and a more streamlined workflow. In our case, PLX is made for practising programming languages, so we decided to stay in a coding environment (the terminal) rather than a GUI.

Work process

Communication

  1. We have a Telegram group to hold group calls, discuss, and ask for reviews
  2. We do 2 small daily coordination meetings: one starting between 9:30 and 10:00, and another around 15:00.

Versioning

We follow semver (Semantic Versioning), see the specification on semver.org. All versions under 1.0.0 are not to be considered stable: breaking changes can appear in the CLI arguments, keyboard shortcuts, file structure, exo syntax, ... The internal Rust code is not exposed externally, as PLX is not a library, so we don't have to consider code changes when deciding on a major version.

Changelog

We follow the Keep a Changelog convention: we write a user-oriented changelog at each release to describe changes in a more accessible way than the git log output between releases.

Commits

We try to follow the Conventional Commits convention.

Contribution

We use a standard contribution workflow

  • An issue per feature/bug
  • A branch to implement a new change
  • A PR so the change can be reviewed before merging it to main

More info on how this contribution process is integrated into our CI/CD pipeline can be found here

CI/CD strategy

Most of the release process should be automated; this is why we configured GitHub Actions to run different jobs.

PR validation strategy

  1. On each PR (and when new commits arrive) and on push on main, cargo build and cargo test are run to make sure everything is working
  2. On each git tag, we will run a CI job to test, build and run cargo publish to release PLX on crates.io

GitHub workflow

  1. We protect the main branch on the main repository to avoid pushing commits directly without any review. The 2 other repositories (website + organisation profile) are not protected, for ease of change.
  2. For each feature or change:
    1. we create a new issue and assign it to the correct person,
    2. create a new branch,
    3. try to follow the Conventional Commits standard for writing commit messages,
    4. when done, we send a PR.
  3. The PR is merged only after one review; trivial changes that do not need a review can be merged by the PR creator.
  4. GitHub is configured to block merging if CI jobs are failing.
  5. We try to delete the branch when the PR is merged.

contributing

Release strategy

To release a new version of PLX, here are the manual steps:

  1. Create a new release branch
  2. Choose a new version number following semantic versioning
  3. Modify the CHANGELOG.md to document changes since last release
  4. Modify the Cargo.toml with the chosen version
  5. Run cargo build to update the duplicated version number in Cargo.lock
  6. Push the changes
  7. Open and merge PR of this release branch (tests must pass so we cannot release code with compilation errors)

The CI release job starts and detects a version change (the version in Cargo.toml differs from the latest git tag), so the release process starts:

  1. Create a new tag with the extracted version
  2. Create a new release on GitHub with a link to the CHANGELOG.md
  3. Run cargo publish to publish plx on crates.io

The result is that running cargo install plx again will install the new version!

workflow

Features

We described a lot of details about the better experience and the problems of the current one; here is a central list of the features we need to develop during PDG, plus some other ideas for later. Some of them will have a dedicated issue on GitHub, but it is easier to see the global picture and the progress here.

For PDG

Functional requirements

Feature | Status | Description
View the Home page | TODO |
View the List page | TODO | With the list of skills and exos
C++ exo build+execution, without configuration and without visible build folder | TODO | Support compiling C++ in single or multiple files via Xmake, without config, in a separated build directory
Java exo build+execution, without configuration and without visible build folder | TODO | Support compiling Java in single or multiple files with javac
Exo creation with one main file or possibly more starting files | TODO | Support a way to describe those metadata and indicate which files are relevant
Definition of automated checks verifying outputs | TODO |
Run automated output checks on starting files | TODO | Run each check, generate the result and a diff if it differs
Run automated output checks on solution files | TODO | Adapt the compilation to build the solution files instead and do the same things, after having checked the base files
Execute a check on a given binary file | TODO | Check and display whether exercises passed or failed
Show why checks are failing | TODO | Show why the exercise failed (diff of output / solution)
Start of the app | TODO | It's possible to resume the last exo or the next logical one by pressing r
Preview of exos | TODO | When searching for an exercise to do, show a preview of the exo with its metadata, but do not run compilation
Save and restore exos states | TODO | Save exo states (done / not done / in progress) and restore them on subsequent app launches; show the states immediately in the list with colors; enable exo resuming
Code editor opening | TODO | Open the code editor when launching or switching to another exercise; the editor is defined via $EDITOR
Provide integrated documentation | TODO | Press ? to get an integrated documentation of all available keybinds

TODO: continue this list

Non functional requirements

  1. It should be easy to create new exos and maintain them: converting an existing C++ exo should take less than 2 minutes.
  2. The watcher should be performant: it should only watch files that could be modified by the student or teacher, and it should take less than 100ms to detect a change.
  3. PLX should be usable during exo creation too, to make sure the checks are passing.
  4. Once an exo is opened, with one IDE window at the right and the terminal with PLX at the left, the students should not need to open or move other windows and should be able to only Alt+Tab. All the automatable steps should be automated to keep the focus on learning tasks (including build, build configuration, running, output diffing, manual entries in the terminal, detecting when to run, showing the solution, switching to the next exo).
  5. Switching to the next exo should take less than 10 seconds. After this time, the IDE should be opened with the new file and PLX should show the new exo details.
  6. Trivial exo files should not need any build configuration: PLX should be able to guess how to build the target with the available files.
  7. Cross-platform compatibility: PLX should work on Linux, Windows and macOS machines.
  8. PLX should be designed in a modular way that allows for easy addition of new features.
  9. Compiling an exercise should take less than 10 seconds.
  10. When saving a file, the compilation starts. If a compilation is already running when saving a file, PLX should kill the current compilation and launch a new one.
  11. When launching the tests, if tests are already running, they should be stopped and relaunched.
  12. PLX must have a file watcher and file parser to be able to watch the edited file(s) and report their state. This is necessary to be able to flag the exercise (in progress, done, not started) and return error descriptions or the status (passed / failed) of the exercise.
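Requirements 10 and 11 both boil down to "the latest save wins": pending saves make any running build obsolete. A minimal sketch of that idea, assuming a hypothetical FileSaved event and a std::sync::mpsc channel (the real PLX event types may differ):

```rust
use std::sync::mpsc;

// Hypothetical event type: a file save notification carrying the file path.
#[derive(Debug, Clone, PartialEq)]
struct FileSaved(String);

// Drain the channel so that only the most recent save is kept: if several
// saves arrived while a compilation was running, the older ones are obsolete
// and the running build should be cancelled in favor of the latest state.
fn latest_save(rx: &mpsc::Receiver<FileSaved>) -> Option<FileSaved> {
    let mut latest = None;
    while let Ok(event) = rx.try_recv() {
        latest = Some(event); // an older pending build would be killed here
    }
    latest
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Three quick saves in a row: only the last one should trigger a build.
    tx.send(FileSaved("dog.c".into())).unwrap();
    tx.send(FileSaved("dog.c".into())).unwrap();
    tx.send(FileSaved("main.c".into())).unwrap();
    let event = latest_save(&rx);
    assert_eq!(event, Some(FileSaved("main.c".into())));
    println!("rebuild triggered by {:?}", event.unwrap());
}
```

The same coalescing strategy applies to test runs (requirement 11): a new save invalidates tests in flight.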

For later

PDG is only 3 weeks, but we already have some ideas and improvements for future development:

Feature | Status | Description
Review mode | TODO | A big feature to help clone all forks, pull them regularly, run all tests, review specific exos, flag some answers
A global progress grid | TODO | To easily view general progress in colors
"Lab/Single exo" mode | TODO | Just run a single exo definition without all the general project definition and lists around
Concurrent execution of checks | TODO | If a CLI has 10 checks and each takes 2 seconds to execute, running all tests one after the other will take 20 seconds ! If we could run them in 2.1 seconds, that would be much better !!
Smart execution of tests | TODO | Execute the first displayed failing check first, then continue with the next failing tests, and finally with the already passing tests. Invent other strategies to run them more efficiently
Add a run command to PLX | TODO | Enable running an exo binary without digging into the build folder. This could be useful for labs and PCO exos when we want to run the exo binary ourselves to choose arguments or pipe into grep, without typing ../../build/mutex/mut1/exo
Import course and skills files from DY equivalent | TODO | Courses that use Delibay + PLX might want to easily set up those files

TODO: continue this list

Experience for teachers

Warning: this experience has not been implemented yet... some of it will come in the post-PDG development period.

Introduction

From a first look at PLX, it seems the delightful experience only benefits students... It is true that the project conception mainly focuses on students, because they are the learners, but we thought about teachers too. Here are the key reasons why you should consider PLX and how it could help you build better courses, driven by practice and augmented by feedback for your students !

Enhance your exercises by easily adding automated checks

Instead of installing a test runner and configuring compilation, you can already cover some cases just by checking the output, defining the program arguments for various situations. Use this to verify the common and edge cases.

Simplify or remove build configurations

For most cases, PLX can guess how to build the exo, you don't need to provide a Makefile or CMakeLists.txt, nor a pom.xml. You can customize the build via a xmake.lua if necessary, making it very easy to add a dependency like GoogleTest without requiring students to install it.

Simplify management of exo files and solutions

Quickly edit an exo from the list, use templates for faster exo writing, and run checks on the starting file and the solution file.

Enable general overview and easy review

If all your students fork a main repository and regularly push their answers, you can clone all those repositories, and new opportunities for review become possible. This is not supported yet, but we could imagine a review mode where PLX could run all tests of all exos for all students! This would allow generating a statistics grid to see the global progress. It would also enable human review of each answer in a row, to generate discussions and feedback in class.

Development documentation

TODO: insert TOC here

Introduction

The goal of this documentation is to make sure that important decisions and non-obvious information useful to new and existing contributors are documented. This is important to make the project future-proof and maintainable.
If you want to make a non-trivial contribution to the project, you really should read most of it, or at least the sections related to your problem, in addition to getting a global overview of the implemented features. Knowing at least the goal of each module is important so you can see how the different pieces fit together.

TODO: write this at the end of PDG time once we have a good structure

Current architecture

TODO: add date TODO: add new schema TODO: explain the events system + async advantages +

Modules goals and useful details

Test suite

TODO: dump test suite pretty output here

UI

A Ratatui-based UI that is as dumb as possible, because this is the hardest part to test. Most of the code should just be the definition of UI components. The UI reads the core.ui_state enum to know which page and situation to display, and renders the appropriate page at the rate of TODO.

  1. Keyboard shortcuts are just sent as events to the core, nothing is done with them directly. TODO: Exception of Q ?

TODO: Testing ?

Parser

TODO: Testing ?

The Parser provides functionality for serializing and deserializing data to and from the TOML file format. It converts TOML strings into Rust data structures and vice versa using the serde library. These functions handle the conversion by leveraging serde's DeserializeOwned and Serialize traits, ensuring that any compatible Rust type can easily be transformed to and from TOML. This allows for efficient data interchange and configuration management in applications that use TOML configuration files.

Compiler

TODO: Testing ?

Runner

TODO: Testing ?

Watcher

TODO: Testing ?

The FileWatcher is designed to monitor a specified directory for file modifications. It runs in a separate thread and uses channels to communicate events back to the main thread. The provided tests verify its functionality by checking that the watcher correctly detects changes both in nested folders and at the root of the watched directory. When a file is modified, the watcher sends an event through the channel, and the test asserts that the correct event (Event::FileSaved) is received.
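As a rough sketch of this design (the Event enum is reduced here to the single variant mentioned above, and the thread merely simulates a filesystem notification), a watcher thread reporting through a channel looks like:

```rust
use std::sync::mpsc;
use std::thread;

// Simplified event type; the real watcher has more variants.
#[derive(Debug, PartialEq)]
enum Event {
    FileSaved,
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // The watcher runs in its own thread and reports through the channel...
    let watcher = thread::spawn(move || {
        // ...here we pretend a modification was detected in the watched folder.
        tx.send(Event::FileSaved).unwrap();
    });
    // The main thread blocks until an event arrives, as the tests assert.
    assert_eq!(rx.recv().unwrap(), Event::FileSaved);
    watcher.join().unwrap();
    println!("FileSaved event received");
}
```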

Encountered issues

  1. Colors not visible in CI after using the console crate: the tests in CI run in a non-interactive process (you can simulate this by running cargo test > output), so the colors were not applied and we couldn't compare the output with the expected "ANSI codes included" output.
    // from diff.rs
    console::set_colors_enabled(true); // this is how we can force colors
    let old = "Hello\nWorld\n";
    let new = "Hello\nWorld Test\n";
    let diff = Diff::calculate_difference(old, new, None);
    let ansi = diff.to_ansi_colors(); // this is where the console crate is called
    let expected_ansi = r" Hello
    -World
    +World Test
    ";

Tips for testing

  1. Better debug in test output
    1. Debug view and pretty printing: println!("{:#?}", exo); exo must have the Debug trait implemented; the easiest way to do it is with #[derive(Debug)].
    2. Printing colors: println!("{}", diff.to_ansi_colors()); run via cargo test -- --nocapture to not hide println output during tests.
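A minimal illustration of tip 1, with a hypothetical Exo struct:

```rust
// #[derive(Debug)] is the easiest way to implement the Debug trait,
// enabling the pretty-printed {:#?} view used when debugging tests.
#[derive(Debug)]
struct Exo {
    name: String,
    done: bool,
}

fn main() {
    let exo = Exo { name: "basic-args".into(), done: false };
    // {:#?} pretty-prints over several indented lines, {:?} stays on one line.
    println!("{:#?}", exo);
    assert!(format!("{:#?}", exo).contains("name: \"basic-args\""));
}
```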

Logo design

In the spirit of Delibay and PRJS logos, a simple gradient was chosen on the project name. Like PRJS, the text is an ASCII art creation. It was generated with the help of Calligraphy with the font Blocky. The gradient is composed of these 2 colors: #fc1100, #ffb000 applied to this text piece.

████████  ██       ██     ██ 
██     ██ ██        ██   ██  
██     ██ ██         ██ ██   
████████  ██          ███    
██        ██         ██ ██   
██        ██        ██   ██  
██        ████████ ██     ██ 

It was colored in Inkscape with a monospace font (any monospace font should be okay) and exported in SVG:

logo of PLX

Contribution guide

WARNING: we are not open to contribution right now, this guide will be useful in the future or just for the PDG team.

Development

In addition to having installed the prerequisites from the Installation page, you have to

  1. Clone the repository git clone git@github.com:plx-pdg/plx.git
  2. Go into the plx folder
  3. Build the program
    cargo build
    # or in release mode
    cargo build --release
    
    You can find the result binary respectively in target/debug/plx and target/release/plx.
  4. And/or run it. To run the binary without knowing its path, just run cargo run.

Tips

  1. Install the crate cargo-watch (cargo install cargo-watch) and run cargo watch -x run -c to rebuild, run cargo run, and clear the screen between each execution. This provides a very convenient feedback loop.

Writing tests

Unit tests

TODO: when we know how to write them

Integration tests

TODO: when we know how to write them

UI testing

TODO: when we know how to write them


Design

In this section of the documentation, we publish brainstorming and in-progress conceptions of current and future features. It can also serve as a way to easily share them with teachers and students to ask for feedback.

Exos management

This is documentation-driven design: the state of our research on the best exos management solution for the supported languages. This will probably be useful to future contributors who want to understand why we made these decisions, and may help other *lings projects in their thinking.

Defining all toml

course.toml

To create a course we need a name and a selection of skills (chapters) in a specific order.

name = "Full fictive course"
skills = ["intro", "pointers", "parsing", "structs", "enums"]

skill.toml

To create a new skill, we need a name and a list of the names of its exercises, in a specific order.

name = 'Introduction'
exos = ['basic-args', 'basic-output']

exo.toml

For each exercise we have:

  • a name
  • the instructions
  • all the needed checks to be done
name = 'Basic arguments usage'
instruction = 'The 2 first program arguments are the firstname and number of legs of a dog. Print a full sentence about the dog. Make sure there are at least 2 arguments, print an error if not.'
[[checks]]
name = 'Joe + 5 legs'
args = ["Joe", "5"]
test = { type = "output", expected = "The dog is Joe and has 5 legs" }
[[checks]]
name = 'No arg -> error'
test = { type = "output", expected = "Error: missing argument firstname and legs number" }
[[checks]]
name = 'One arg -> error'
args = ["Joe"]
test = { type = "output", expected = "Error: missing argument firstname and legs number" }
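Conceptually, an output check reduces to comparing the expected output from a [[checks]] entry with the captured program output. A minimal sketch with hypothetical names (CheckResult and run_output_check are illustrative, not the actual PLX API):

```rust
// Outcome of one output check; a failure keeps both strings so a diff
// can be displayed to the student.
#[derive(Debug, PartialEq)]
enum CheckResult {
    Passed,
    Failed { expected: String, got: String },
}

// Compare the program's actual output with the `expected` field of a check.
fn run_output_check(expected: &str, actual: &str) -> CheckResult {
    if expected == actual {
        CheckResult::Passed
    } else {
        CheckResult::Failed { expected: expected.to_string(), got: actual.to_string() }
    }
}

fn main() {
    // The "Joe + 5 legs" check from the exo.toml above
    let expected = "The dog is Joe and has 5 legs";
    assert_eq!(run_output_check(expected, "The dog is Joe and has 5 legs"), CheckResult::Passed);
    // A failing run keeps both strings so a diff can be computed later
    match run_output_check(expected, "The dog is Joe") {
        CheckResult::Failed { got, .. } => println!("check failed, got: {got}"),
        _ => unreachable!(),
    }
}
```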

Generated toml

.course-state.toml

This state is used to save the skill and exercise index for resuming.

curr_skill_idx = 0
curr_exo_idx = 0

.exo-state.toml

For each exercise, there is a state that can be in progress, done, or not done. There is also an optional favorite option to place the exercise into our personal selection.

state = "InProgress"
favorite = false

Defining "the best"

We believe the best structure will lead to the following qualities:

  1. Speed to create new exos and editing existing ones
  2. Ease of learning the structure and syntax to create and maintain exos
  3. Ease of developing a solution and adding checks without always writing unit tests with test runner

This can be measured with the following speed metrics:

  1. Converting an existing C++ exo should take less than 2 minutes.
  2. Converting "here is the expected output" into a PLX check should take less than 15 seconds

Goal

  1. Define a folder and file structure to define a single exo and an entire course with all exos grouped by skills (sometimes we could just call them chapters).
  2. Define the syntax used to define the various exo metadata
  3. Define build configuration files and build hints if necessary
  4. Show different examples for various exo types in the 3 languages
  5. Define templates used to quickly fill new exos and how to use them

Research on similar tools

Existing projects like rustlings and inspired projects like haskellings, cplings, ziglings, and even PRJS have opinions on their structure and their metadata system. Each language has its own constraints, build system, ease of testing, and testing tools integration... but there are probably new ideas to take inspiration from in these tools. In addition to reading the contributing guides, looking at GitHub issues opened by new contributors on these repositories could also reveal advantages or flaws in the structure, which can help us decide which direction to take.

Rustlings

Website: rustlings.cool - CONTRIBUTING.md - Third party exos (outside of Rustlings repos)

PRJS

Website: Repos samuelroland/prjs - Exo management: exos.md

Cplings

Website: Repos rdjondo/cplings - Exo management: exos.md

Golings

Website: Repos mauricioabreu/golings - CONTRIBUTING.md

Ziglings

Build system design

Design brainstorming on

How to structure a system to build C, C++ and Java, to support various dependencies, and to reduce manual configuration to a minimum ?
How to make the build cache mostly invisible ?

The structure and formats are not ready yet, but it will look something like Delibay (course repos + course info + skills list + exos list per skill) and one folder per exo.

cppexos # exos repository, C++ as example here
  course.toml # define your course info
  skills.toml # define your skills info and order
  structs # first skill
    exos.toml # exos definition in this skill
    mega-dog # first exo in this skill
      dog.cpp
    unit-dog # second exo with unit tests in GoogleTest
      dog.cpp
  pointers # second skill
    exos.toml
    on-functions # first exo in this skill with 3 files
      libparse.c
      libparse.h
      main.cpp
    debug # second exo in this skill with one file
      crash.cpp

Build challenges

  1. Problem 1: How to detect project type ?
    1. How to avoid the need to define the compilation type ? How to guess it instead ?
  2. Problem 2: How to manage trivial and non trivial build situations ?
    1. For trivial situations, how should we name the target ?
    2. How to define some dependencies like GoogleTest, Unity or other test runners or libraries
    3. How to install those dependencies ? How to accept xmake prompt to install ?
  3. Problem 3: How to make build system almost invisible ?
    1. Having 3 additional elements per exo folder is too much noise (xmake.lua, .xmake and build); it would be better to have most build-related elements at the root of the repository
    2. We cannot have a common flat build folder for all exos, because it would create strange errors. We cannot trash it before each exo either, because we would lose the big speed improvement of the build cache when running all exos or re-running an exo done in the past.
    3. How to differentiate generated and manually written build configurations ?

Build strategies

Solution to problem 1: Use an existing configuration or guess how to build trivial cases

  1. Define ExoType = None
  2. If the exo folder contains a xmake.lua, it is used to build: ExoType = xmake
  3. Otherwise, if it contains any .c or .cpp file: ExoType = xmake
  4. Otherwise, if it contains any .java, using javac strategy: ExoType = java

If ExoType == None at this point, throw an error because it is not possible to build this exo...
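These guessing rules can be sketched as a simple priority chain (the names ExoType and guess_exo_type are illustrative, not the actual PLX code):

```rust
// Possible build strategies, mirroring the rules above.
#[derive(Debug, PartialEq)]
enum ExoType {
    None,
    Xmake,
    Java,
}

// Inspect the exo folder's file names in order of priority.
fn guess_exo_type(files: &[&str]) -> ExoType {
    if files.iter().any(|f| f.ends_with("xmake.lua")) {
        ExoType::Xmake // an existing xmake.lua always wins
    } else if files.iter().any(|f| f.ends_with(".c") || f.ends_with(".cpp")) {
        ExoType::Xmake // trivial C/C++ exo, the config will be generated
    } else if files.iter().any(|f| f.ends_with(".java")) {
        ExoType::Java // javac strategy
    } else {
        ExoType::None // nothing buildable: report an error
    }
}

fn main() {
    assert_eq!(guess_exo_type(&["dog.cpp"]), ExoType::Xmake);
    assert_eq!(guess_exo_type(&["Main.java"]), ExoType::Java);
    assert_eq!(guess_exo_type(&["notes.md"]), ExoType::None);
    println!("guesses ok");
}
```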

Solution to problem 2:

  • If ExoType == java, only trivial situations are supported: with javac directly, running the following command: javac -d path/to/build/folder Main.java. The main file should be named Main.java. It will be run with java -classpath path/to/build/folder/ Main.

  • If ExoType == xmake and there is no existing xmake.lua, we create it on the fly (only the first comment will be included)

    -- Autogenerated by PLX based on guess: xmake + cpp + c. Do not edit directly.
    target("exo")
    add_files("**.cpp") -- add this line only when .cpp files have been detected
    add_files("**.c") -- add this line only when .c files have been detected
    -- it's possible to have both C and C++ at the same time but only one `main()` function
    
  • In case we define external libraries or xmake packages in the exo metadata (here pcosynchro as a library and gtest as a package), we dynamically generate the xmake instructions add_requires+add_packages and add_links.

    add_requires("gtest")
    
    target("exo")
    add_files("**.cpp")
    add_packages("gtest")
    add_links("pcosynchro")
    
  • If it is not possible to solve the build situation with the above possibilities, the teacher needs to create the xmake.lua by hand. The target must also be named exo so PLX can detect and run it via xmake run exo.

Solution to problem 3: Group all build folders in a single build folder at the root of the repository:

  • Xmake can use a specific configured build folder and xmake.lua, and javac supports a custom output folder, so we can make build files almost invisible. This also has the advantage of removing the ambiguity of whether a xmake.lua has been written by a teacher or dynamically generated, as they are located in different folders: the dynamic config inside the build/... structure, the hand-written one in the exo folder.
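The mapping from an exo folder to its generated config can be sketched like this (a hypothetical helper, assuming the build folder mirrors the exo tree as described):

```rust
use std::path::{Path, PathBuf};

// Mirror the exo path under the common `build` folder, where the
// generated xmake.lua and the build cache live.
fn generated_config_path(exo_dir: &Path) -> PathBuf {
    Path::new("build").join(exo_dir).join("xmake.lua")
}

fn main() {
    let config = generated_config_path(Path::new("structs/mega-dog"));
    assert_eq!(config, PathBuf::from("build/structs/mega-dog/xmake.lua"));
    println!("{}", config.display());
}
```

A hand-written xmake.lua stays in the exo folder itself, so the two locations never collide.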

Global overview

Here is an overview example, considering the previous structure but this time including build files

cppexos
  course.toml
  skills.toml
  structs
    exos.toml
    mega-dog
      dog.cpp
    unit-dog
      dog.cpp
      xmake.lua # special case with need of hand written config
  pointers
    exos.toml
    on-functions
      libparse.c
      libparse.h
      main.cpp
    debug
      crash.cpp

  ...

  .gitignore # must contains "build" folder
  build # common build folder, same structure as above inside,
        # but with build config and cache instead of code
    structs
      mega-dog
        xmake.lua # dynamically generated config file
        ## Generated by xmake for this specific exo
        .xmake
        build
    pointers
      on-functions
        xmake.lua # dynamically generated config file
        ## Generated by xmake for this specific exo
        .xmake
        build
          ...

Xmake example in C++: let's say we are doing the structs/mega-dog exo, editing dog.cpp. Here are the steps behind the scenes:

  1. Compilation
    1. We detect this exo has no existing xmake.lua but has .cpp files so the exo type is xmake.
    2. Intermediate folders are created for the path build/structs/mega-dog if necessary
    3. The trivial config is created under build/structs/mega-dog/xmake.lua
      -- Autogenerated by PLX based on guess: xmake + cpp. Do not edit directly.
      target("exo")
      add_files("**.cpp")
      
    4. PLX runs the following command: xmake build -F build/structs/mega-dog/xmake.lua -P structs/mega-dog/ to indicate the source files and the build config file.
  2. Execution
    1. PLX runs the following command: xmake run exo -F build/structs/mega-dog/xmake.lua

Java example: let's say we are doing the try-catch/except-me exo, editing Main.java. Here are the steps behind the scenes:

cppexos
  course.toml
  skills.toml
  try-catch
    exos.toml
    except-me
      Main.java
      Person.java
      Party.java
  build
    try-catch
      except-me
        Main.class # generated by javac
  1. Compilation
    1. We detect this exo contains *.java files, so the exo type is java.
    2. Intermediate folders are created for path build/try-catch/except-me/ if necessary
    3. PLX runs javac -d build/try-catch/except-me/ Main.java
  2. Execution
    1. PLX runs java -classpath build/try-catch/except-me/ Main
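The two commands above can be composed with std::process::Command; this sketch only builds the commands (with the paths from the example) without spawning them, since javac may not be installed. The helper names are illustrative:

```rust
use std::process::Command;

// Compose (but don't spawn) the javac command: compile Main.java into the
// exo's dedicated build folder.
fn javac_command(build_dir: &str, main_file: &str) -> Command {
    let mut cmd = Command::new("javac");
    cmd.args(["-d", build_dir, main_file]);
    cmd
}

// Compose the java command: run the Main class from the build folder.
fn java_command(build_dir: &str, main_class: &str) -> Command {
    let mut cmd = Command::new("java");
    cmd.args(["-classpath", build_dir, main_class]);
    cmd
}

fn main() {
    let compile = javac_command("build/try-catch/except-me/", "Main.java");
    let args: Vec<_> = compile.get_args().map(|a| a.to_string_lossy().to_string()).collect();
    assert_eq!(args, ["-d", "build/try-catch/except-me/", "Main.java"]);
    let run = java_command("build/try-catch/except-me/", "Main");
    assert_eq!(run.get_program().to_string_lossy(), "java");
    println!("commands composed");
}
```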

We recommend reading this page, with its nice previews, on the deployed version at plx.rs

logo of PLX

Why

The classic way of doing programming exercises is full of friction that slows down the progress, and creates distraction from learning. PLX is here to redefine the experience, based on deliberate practice principles, because IT students really deserve it.

Practice programming exos in delightful Learning eXperience

Whether you are learning C, C++ or Java, PLX can bring you

  1. 🔁 A feedback loop as short as possible
    The compilation and checks run immediately after file changes, removing the need to click the Play button or to find the build command and target file.
  2. 💯 100% focus on the learning tasks
    Writing code, fixing compilation errors, making checks pass and refactor. All other administrative steps are automated, reducing feedback time and removing some mental overhead !
  3. Various kinds of automated checks and rich results
    Mentally comparing 20 lines of output with the expected output is a thing of the past! The output is already diffed to highlight differences in colors.

Image of the interface of PLX when looking at test results. This is PLX during a small C exo, to the left of the IDE opened on dog.c.

The classic experience

If you are not helped by any tool except your IDE (no existing file, no test, no runner, no watch mode), and you compile+run the exo manually in the terminal, here is a view of the workflow.

Image of the classic experience of programming a small CLI

All steps (the bubbles) are necessary actions to be done manually by students, the blue ones are those that could be completely/partially automated.

The PLX experience

In the same context, running an exo with PLX looks like this: no more blue steps, faster process and almost zero friction!

Image of the PLX experience of programming a small CLI. Here we consider the Compare output step to no longer exist, because PLX shows a nice word-level diff of the output compared to the expected one, enabling instant understanding of the issue.

Helping students

As you can imagine PLX can have a big impact on the speed and the flow of training on programming problems, finally leading to better and more efficient practice. But it could also help for labs !

Already developed a battleship in the command line ? Or any kind of game with user inputs and a changing board ? How can you make sure all scenarios work ? You can either test them one by one, at the end or regularly, but that's boring... What if you could describe your scenarios with expected outputs and inputs to enter, to validate the whole game in various situations ?

Helping teachers

If you are teaching a course related to C, C++ or Java, PLX can help you to

  1. ✅ Enhance your exercises by easily adding automated checks
  2. 🏗️ Simplify or remove build configurations
  3. ⌨️ Simplify management of exo files and solutions
  4. 📊 Enable general overview and easy review

Want to know more ? See Experience for teachers in the docs.

Course management

We have designed a ...

WHY ? - Git repository of PLX - Git repository of this website - Development documentation