Blog

  • .dotfiles

    .DOTFILES

    Just my .dotfiles

These .dotfiles are used (mainly) on a Dell XPS 9560 with Fedora 30 and (minus ansible) on a ThinkPad P51 with Ubuntu 19.04.

Configured software

This repository contains my configurations for the following software:

    • i3
    • rofi
    • git
    • zsh

    Automate all the things

For now, ansible is used to set up a new desktop from scratch. The roles install all the software I need, initialize all my project repositories, and copy my important data onto the device.

    This is the process I use to install a new device:

    1. Install a new distribution manually
    2. Mount my encrypted back-up to /tmp/decrypted_lutices
3. Manually copy my .ssh keys and add them to the agent
    4. Then run
    sudo dnf install git ansible
git clone https://github.com/AmarOk1412/.dotfiles
    cd .dotfiles
    ansible-playbook playbook.yml -K --extra-vars "ldap=<my_ldap>"
5. Drink maté

    For the server:

    ansible-playbook playbook_enconn.yml -u amarok -i hosts --tags=server
    

    Lutices

Lutices is the name of my LUKS2-encrypted backup.
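Unlocking and mounting it for step 2 above could look like this (a minimal sketch; the device path /dev/sdb1 is an assumption):

sudo cryptsetup open /dev/sdb1 lutices                  # unlock the LUKS2 container
sudo mount /dev/mapper/lutices /tmp/decrypted_lutices   # mount it at the expected path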

This is the minimal internal structure of my backups:

    .
    ├── .key
    │   └── EEB2A9A9.key
    ├── .password-store
    ├── Pictures
    │   ├── avatars
    │   └── wallpapers
    ├── .ssh
    ├── .thunderbird
    └── TODOLists
    
• .ssh will populate .ssh
• .key will populate gnupg2
• .password-store will populate pass
• .thunderbird will populate thunderbird
• The other directories contain my minimal data (see the sketch below).
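Done by hand, restoring those secrets would amount to something like the following sketch of what the roles automate (paths follow the tree above):

gpg2 --import /tmp/decrypted_lutices/.key/EEB2A9A9.key            # populate gnupg2
cp -r /tmp/decrypted_lutices/.ssh/. ~/.ssh/                       # populate .ssh
cp -r /tmp/decrypted_lutices/.password-store ~/.password-store    # populate pass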
    Visit original content creator repository https://github.com/AmarOk1412/.dotfiles
  • parser


Requires PHP 8.1 or newer.

    Reference implementation for TypeLang Parser.

    TypeLang is a declarative type language inspired by static analyzers like PHPStan and Psalm.

Read the documentation pages for more information.

    Installation

TypeLang Parser is available as a Composer package and can be installed by running the following command in the root of your project:

    composer require type-lang/parser

    Quick Start

    $parser = new \TypeLang\Parser\Parser();
    
    $type = $parser->parse(<<<'PHP'
        array{
            key: callable(Example, int): mixed,
            ...
        }
        PHP);
    
    var_dump($type);

    Expected Output:

    TypeLang\Parser\Node\Stmt\NamedTypeNode {
      +offset: 0
      +name: TypeLang\Parser\Node\Name {
        +offset: 0
        -parts: array:1 [
          0 => TypeLang\Parser\Node\Identifier {
            +offset: 0
            +value: "array"
          }
        ]
      }
      +arguments: null
      +fields: TypeLang\Parser\Node\Stmt\Shape\FieldsListNode {
        +offset: 11
        +items: array:1 [
          0 => TypeLang\Parser\Node\Stmt\Shape\NamedFieldNode {
            +offset: 11
            +type: TypeLang\Parser\Node\Stmt\CallableTypeNode {
              +offset: 16
              +name: TypeLang\Parser\Node\Name {
                +offset: 16
                -parts: array:1 [
                  0 => TypeLang\Parser\Node\Identifier {
                    +offset: 16
                    +value: "callable"
                  }
                ]
              }
              +parameters: TypeLang\Parser\Node\Stmt\Callable\ParametersListNode {
                +offset: 25
                +items: array:2 [
                  0 => TypeLang\Parser\Node\Stmt\Callable\ParameterNode {
                    +offset: 25
                    +type: TypeLang\Parser\Node\Stmt\NamedTypeNode {
                      +offset: 25
                      +name: TypeLang\Parser\Node\Name {
                        +offset: 25
                        -parts: array:1 [
                          0 => TypeLang\Parser\Node\Identifier {
                            +offset: 25
                            +value: "Example"
                          }
                        ]
                      }
                      +arguments: null
                      +fields: null
                    }
                    +name: null
                    +output: false
                    +variadic: false
                    +optional: false
                  }
                  1 => TypeLang\Parser\Node\Stmt\Callable\ParameterNode {
                    +offset: 34
                    +type: TypeLang\Parser\Node\Stmt\NamedTypeNode {
                      +offset: 34
                      +name: TypeLang\Parser\Node\Name {
                        +offset: 34
                        -parts: array:1 [
                          0 => TypeLang\Parser\Node\Identifier {
                            +offset: 34
                            +value: "int"
                          }
                        ]
                      }
                      +arguments: null
                      +fields: null
                    }
                    +name: null
                    +output: false
                    +variadic: false
                    +optional: false
                  }
                ]
              }
              +type: TypeLang\Parser\Node\Stmt\NamedTypeNode {
                +offset: 40
                +name: TypeLang\Parser\Node\Name {
                  +offset: 40
                  -parts: array:1 [
                    0 => TypeLang\Parser\Node\Identifier {
                      +offset: 40
                      +value: "mixed"
                    }
                  ]
                }
                +arguments: null
                +fields: null
              }
            }
            +optional: false
            +key: TypeLang\Parser\Node\Identifier {
              +offset: 11
              +value: "key"
            }
          }
        ]
        +sealed: false
      }
    }
    Visit original content creator repository https://github.com/php-type-language/parser
  • fxserver


    fxserver

    Setup cloud as a VFX server.



    About

A quick tutorial on setting up a Cloud server for multi-machine access and a VFX pipeline on Windows, macOS, and Linux. This repository is based on Google Drive VFX Server, with loads of improvements.

    Setup Server

First, you’ll need to mount your Cloud server on your system, using any software you like (rclone, Google Drive File Stream, etc.).
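For example, mounting a Google Drive remote with rclone could look like this (a minimal sketch; the remote name gdrive and the mount point are placeholder assumptions, not part of this setup):

rclone mount gdrive: ~/GoogleDrive --daemon   # mount the cloud remote in the background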

    We can then start moving files around. The setup only relies on environment variables:

    • SERVER_ROOT: The root of the mounted Cloud server. This is the only value that needs to be changed depending on your setup
    • CONFIG_ROOT: The .config folder
    • ENVIRONMENT_ROOT: the .config/environment folder
    • PIPELINE_ROOT: the .config/pipeline folder

    You can now download the code from this repository and extract its content to your SERVER_ROOT. Using Z:/My Drive as the mounted Cloud server path, it should look like this:

    .
    └── 📁 Z:/My Drive/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline

Which is equivalent to:

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 $CONFIG_ROOT/
            ├── 📁 $ENVIRONMENT_ROOT
            └── 📁 $PIPELINE_ROOT

You will need to set SERVER_ROOT in .zshrc (Unix) and/or dcc.bat (Windows) to your mounted Cloud server path (see the sketch after this list):

• In .zshrc: export SERVER_ROOT="Path/to/drive/linux" (lines 12, 17, and 21)
• In dcc.bat: setx SERVER_ROOT "Path\to\drive\windows" (line 9)
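Put together, the relevant part of .zshrc could look like this sketch (only SERVER_ROOT is machine-specific; the derived variables follow the folder structure above):

export SERVER_ROOT="Path/to/drive/linux"              # machine-specific mount path
export CONFIG_ROOT="$SERVER_ROOT/.config"             # the .config folder
export ENVIRONMENT_ROOT="$CONFIG_ROOT/environment"    # the .config/environment folder
export PIPELINE_ROOT="$CONFIG_ROOT/pipeline"          # the .config/pipeline folder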

Once the folder structure is created and the SERVER_ROOT value has been modified, you can assign the environment variables:

    Windows

Windows supports shell scripting after some extra setup, but it’s much easier to write the environment variables permanently by running dcc.bat.

    dcc.bat

    To check that everything is working:

• Press Win + I to open the Windows Settings
• Scroll to the bottom of the page and click About
• Navigate to Device Specifications and press Advanced System Settings
• In the System Properties dialogue box, hit Environment Variables
• The freshly created variables should be under User
• Check that SERVER_ROOT has been defined with the right path

    Unix

macOS and Linux are both Unix-based operating systems. The simplest way is to migrate your shell to Zsh using chsh -s $(which zsh) in your terminal. You can then symlink .zshrc into your $HOME folder. To check that everything is working, restart your terminal and type echo $SERVER_ROOT: it should output your mounted Cloud server path.
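In practice, the Unix setup boils down to a few commands; a sketch, assuming .zshrc lives in the repository’s environment folder:

chsh -s $(which zsh)                                          # make zsh the default shell
ln -s "Path/to/drive/.config/environment/.zshrc" ~/.zshrc     # symlink the shared .zshrc (assumed location)
echo $SERVER_ROOT                                             # after restarting the terminal, prints the mount path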

    Warning

.zshrc needs to be named exactly that way in $HOME to be picked up by the shell: remove any suffix an alias or symlink adds to the name.

    Warning

    The Make Alias command in macOS Finder won’t work properly. You should use this service instead to create proper Symlinks: Symbolic Linker

    Software

    This setup automatically links the following DCCs, using this folder structure:

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                ├── 📁 houdini               ──> Using $HSITE
                ├── 📁 maya                  ──> Using $MAYA_APP_DIR
                ├── 📁 nuke                  ──> Using $NUKE_PATH
                ├── 📁 other
                └── 📁 substance_painter
                    └── 📁 python            ──> Using $SUBSTANCE_PAINTER_PLUGINS_PATH

The DCCs can be launched normally on Windows if the dcc.bat file has been used to define the environment variables.

For macOS and Linux, you should start them from a terminal so they inherit the environment variables defined by .zshrc.

You can find an example script for Houdini here: houdini.sh.

For quick access, an alias named houdini pointing to that script is defined in aliases.sh, so a single command launches Houdini.
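As a rough idea, such a launcher might look like this (a hypothetical sketch, not the actual houdini.sh; it assumes the houdini binary is on PATH):

#!/bin/zsh
# load the environment variables (SERVER_ROOT, HSITE, ...) before launching
source "$HOME/.zshrc"
exec houdini "$@"   # hand over to Houdini, forwarding any arguments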

Maya

    WIP

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 maya/
                    └── 📁 2023/
                        ├── 📄 Maya.env
                        ├── 📁 prefs
                        ├── 📁 presets
                        └── 📁 scripts

Substance Painter

    WIP

    Note
    See Substance Painter environment variables

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 substance_painter/
                    └── 📁 python/
                        └── 📄 plugin.py

Houdini

Houdini will automatically scan the folder defined by $HSITE for folders named houdini<houdini version>/<recognized folder>, such as otls or packages, and load their content at Houdini startup.

You can find two package file examples, both taking advantage of the environment variables defined earlier.

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 houdini/
                    └── 📁 houdini19.5/
                        ├── 📁 desktop
                        ├── 📁 otls/
                        │   └── 📄 digital_asset.hda
                        └── 📁 packages/
                            └── 📄 package.json

Nuke

    Nuke will scan the content of the folder defined by NUKE_PATH, searching for init.py and menu.py.

    You can find an init.py file example, showing how to load plugins on Nuke startup.

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 nuke/
                    ├── 📄 init.py
                    └── 📄 menu.py

    Useful Resources and Tools

    Contact

    Project Link: Cloud VFX Server


    Visit original content creator repository https://github.com/healkeiser/fxserver
  • chem-fight

    Visit original content creator repository
    https://github.com/js13kGames/chem-fight

  • jME-TTF

    Visit original content creator repository
    https://github.com/stephengold/jME-TTF

  • karaf-jaxrs-whiteboard-security

    karaf-jaxrs-whiteboard-security

    Karaf OSGi JAX-RS Whiteboard Security

    Building from sources

Build: mvn clean install

    Deployment in Karaf

    Run Karaf

    ./bin/karaf

Run from the Karaf root folder, not from the ./bin folder! See details at https://karaf.apache.org/get-started.html

    Deploy OSGi JAX-RS Whiteboard Security server to Karaf

Before installation you should build the server from sources! (This is because Karaf, by default, installs everything from the local Maven repository.)

    Add feature repository

    • feature:repo-add mvn:ru.agentlab.security/ru.agentlab.security.feature/LATEST/xml

    Install karaf features and activate OSGi bundles

    Install main feature (installs all sub-features except cors plugin):

    • feature:install ru.agentlab.security.deploy

    Install main feature (installs all sub-features with cors plugin):

    • feature:install ru.agentlab.security.cors.deploy

Or you could install the sub-features one by one:

    • feature:install agentlab-aries-jax-rs-whiteboard-jackson
    • feature:install nimbus-oauth-sdk
    • feature:install ru.agentlab.security.deps
    • feature:install ru.agentlab.security.deploy
    • feature:install ru.agentlab.security.cors.deploy – optional

    Development

• bundle:watch * — Karaf will monitor the local Maven repository and redeploy rebuilt bundles automatically

• bundle:list and la — list all plugins

    • feature:list — list all features

    • display — show logs

• log:set DEBUG — set the logger to detailed (debug) mode

• ./bin/karaf debug — allows attaching a debugger on port 5005

    Visit original content creator repository
    https://github.com/agentlab/karaf-jaxrs-whiteboard-security

  • otp_passage

    otp_passage


    OpenTracing instrumentation library for the Erlang/OTP standard modules.

    This uses passage as OpenTracing API library.

    Documentation

    A Running Example

This example uses the following simple echo server.

    -module(echo_server).
    
    -compile({parse_transform, passage_transform}). % Enables `passage_trace` attribute
    
    -behaviour(gen_server).
    
    -export([start_link/0, echo/1]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
    
    %% Exported Functions
    start_link() ->
        %% Uses `gen_server_passage` instead of `gen_server`
        gen_server_passage:start_link({local, ?MODULE}, ?MODULE, [], []).
    
    echo(Message) ->
        %% Uses `gen_server_passage` instead of `gen_server`
        gen_server_passage:call(?MODULE, {echo, Message}).
    
    %% `gen_server` Callbacks
    init(_) -> {ok, []}.
    
    handle_call({echo, Message}, _, State) ->
      log(Message),
      {reply, Message, State}.
    
    handle_cast(_, State) -> {noreply, State}.
    handle_info(_, State) -> {noreply, State}.
    terminate(_, _) -> ok.
    code_change(_, State, _) -> {ok, State}.
    
    %% Internal Functions
    -passage_trace([]). % Annotation for tracing.
    log(Message) ->
      io:format("Received: ~p\n", [Message]),
      ok.

    By using jaeger and jaeger_passage, you can easily trace and visualize the behaviour of the above server.

    As the first step, you need to start jaeger daemons.

    $ docker run -d -p6831:6831/udp -p6832:6832/udp -p16686:16686 jaegertracing/all-in-one:latest

Next, start an Erlang shell and execute the following commands.

    % Starts `example_tracer`
    > Sampler = passage_sampler_all:new().
    > ok = jaeger_passage:start_tracer(example_tracer, Sampler).
    
    % Starts `echo_server`
    > {ok, _} = echo_server:start_link().
    
    % Traces an echo request
    > passage_pd:with_span(echo, [{tracer, example_tracer}],
                           fun () -> echo_server:echo(hello) end).

You can see the tracing result in your favorite browser (in this example, Firefox).

    $ firefox http://localhost:16686/

    Jaeger UI

    Visit original content creator repository https://github.com/sile/otp_passage
  • Katas

    Katas

    Here are 27 solved katas; each in a different language.

Sources: CodingDojo, Ruby Quiz, and CodeKata.

BankOCR

• Language: Ruby
• Solution: KataBankOCR.rb
• Tests: KataBankOCR_test.rb

FizzBuzz

• Language: Java
• Solution: KataFizzBuzz.java
• Tests: KataFizzBuzzTest.java; see junit.sh.

Potter

• Language: Python
• Solution: KataPotter.py
• Tests: KataPotter_test.py
• Remark: It needs more tests to know if it’s really solved

RomanNumerals

• Language: Bash
• Solution: KataRomanNumerals.sh
• Tests: KataRomanNumerals_test.sh; see also assert.sh.

RomanCalculator

• Language: JavaScript
• Solution: KataRomanCalculator.js
• Tests: KataRomanCalculator_tests.js

NumberToLCD

• Language: PHP
• Solution: KataNumberToLCD.php
• Tests: KataNumberToLCD_tests.php

Tennis

• Language: C
• Solution: KataTennis.c
• Tests: KataTennis_tests.c

Bowling

• Language: Scala
• Solution: KataBowling.scala
• Tests: KataBowlingTest.scala using ScalaTest 1.7.1; see KataBowling_tests.sh for command-line shortcuts.

PokerHands

• Language: CoffeeScript
• Solution: KataPokerHands.coffee
• Tests: KataPokerHands_tests.coffee using jasmine-node

Minesweeper

• Language: Io
• Solution: KataMinesweeper.io
• Tests: KataMinesweeper_tests.io

KarateChop

• Language: Lisp
• Solution: KataKarateChop.lisp
• Tests: KataKarateChop_tests.lisp

Reversi

• Language: Perl
• Solution: KataReversi.pl
• Tests: KataReversi_tests.pl

GameOfLife

• Language: Groovy
• Solution: KataGameOfLife.groovy
• Tests: KataGameOfLife_tests.groovy; see also junit_gameoflife.sh.

SecretSantas

• Language: Smalltalk
• Solution: KataSecretSantas.st
• Tests: KataSecretSantas_tests.st; see also gst_tests.sh.

WordWrap

• Language: C++
• Solution: KataWordWrap.cpp
• Tests: KataWordWrap_tests.cpp using CppUnit

Diversion

• Language: Forth
• Solution: KataDiversion.fth
• Tests: KataDiversion_tests.fth

AnimalQuiz

• Language: Lua
• Solution: KataAnimalQuiz.lua
• Tests: KataAnimalQuiz_tests.lua using lunit

WordQuery

This is a slightly modified version of the RubyQuiz #54 that doesn’t use a bits index.

• Language: OCaml
• Solution: kataWordQuery.ml
• Tests: kataWordQueryTests.ml using OUnit; see also kataWordQueryTests.sh.

Checkout

• Language: Erlang
• Solution: katacheckout.erl
• Tests: katacheckout_tests.erl using EUnit; see also katacheckout_tests.sh.

Dependencies

• Language: Go
• Solution: katadependencies.go
• Tests: katadependencies_test.go

Trigrams

• Language: Clojure
• Solution: src/kata_trigrams/core.clj; use lein run generate f1.txt f2.json to index f1.txt into f2.json, then lein run generate f2.json 42 to generate 42 random words from the file f2.json
• Tests: test/kata_trigrams/test/*.clj; use lein test.

EnglishWords

• Language: Rust (0.9)
• Solution: kata_english_words.rs
• Tests: kata_english_words_tests.rs; compile and run with make.

WordChains

• Language: Crystal (0.4.3)
• Solution: kata_word_chains.cr
• Tests: kata_word_chains_test.cr

SortChars

• Language: Commodore BASIC
• Solution: kata_sort_chars.bas and a homemade crunched version, kata_sort_chars.crunch.bas
• Tests: kata_sort_chars_tests.sh

Change

• Language: Prolog
• Solution: kata_change.pl and kata_change_cli.pl. Use make then ./kata_change <sum>.
• Tests: kata_change_tests.pl

CodeCracker

• Language: awk
• Solution: ./code_cracker.awk -v key=<key>. It reads (and prints) one message per line.
• Tests: ./code_cracker_tests.sh

ParseID3

• Language: Julia
• Solution: ./parse_id3.jl <file1.mp3> [...]
• Tests: ./parse_id3_test.jl

    Visit original content creator repository
    https://github.com/bfontaine/Katas

  • CT-ADE

    CT-ADE

    CT-ADE: An Evaluation Benchmark for Adverse Drug Event Prediction from Clinical Trial Results

    Citation

    @article{yazdani2025evaluation,
      title={An Evaluation Benchmark for Adverse Drug Event Prediction from Clinical Trial Results},
      author={Yazdani, Anthony and Bornet, Alban and Khlebnikov, Philipp and Zhang, Boya and Rouhizadeh, Hossein and Amini, Poorya and Teodoro, Douglas},
      journal={Scientific Data},
      volume={12},
      number={1},
      pages={1--12},
      year={2025},
      publisher={Nature Publishing Group}
    }
    

    Developed with

    • Operating System: Ubuntu 22.04.3 LTS
      • Kernel: Linux 4.18.0-513.18.1.el8_9.x86_64
      • Architecture: x86_64
    • Python:
      • 3.10.12

    Prerequisites

    1. Set up your environment and install the necessary Python libraries as specified in requirements.txt. Note that you will need to install the development versions of certain libraries from their respective Git repositories.
    2. Place your unzipped MedDRA files in the directory ./data/MedDRA_25_0_English and your DrugBank XML database in the directory ./data/drugbank.

    Ensure you clone and install the following libraries directly from their Git repositories for the development versions:

    Repository Structure

    .
    ├── a0_download_clinical_trials.py
    ├── a1_extract_completed_or_terminated_interventional_results_clinical_trials.py
    ├── a2_extract_and_preprocess_monopharmacy_clinical_trials.py
    ├── b0_download_pubchem_cids.py
    ├── b1_download_pubchem_cid_details.py
    ├── c0_extract_drugbank_dbid_details.py
    ├── d0_extract_chembl_approved_CHEMBL_details.py
    ├── data
    │   ├── MedDRA_25_0_English
    │   │   └── empty.null
    │   ├── chembl_approved
    │   │   └── empty.null
    │   ├── chembl_usan
    │   │   └── empty.null
    │   ├── clinicaltrials_gov
    │   │   └── empty.null
    │   ├── drugbank
    │   │   └── empty.null
    │   └── pubchem
    │       └── empty.null
    ├── e0_extract_chembl_usan_CHEMBL_details.py
    ├── f0_create_unified_chemical_database.py
    ├── g0_create_ct_ade_raw.py
    ├── g1_create_ct_ade_meddra.py
    ├── g2_create_ct_ade_classification_datasets.py
    ├── g3_create_ct_ade_friendly_labels.py
    ├── modeling
    │   ├── DLLMs
    │   │   ├── config.py
    │   │   ├── custom_metrics.py
    │   │   ├── model.py
    │   │   ├── train.py
    │   │   └── utils.py
    │   └── GLLMs
    │       ├── config-llama3.py
    │       ├── config-meditron.py
    │       ├── config-openbiollm.py
    │       ├── config.py
    │       ├── train_S.py
    │       ├── train_SG.py
    │       └── train_SGE.py
    ├── requirements.txt
    └── src
        └── meddra_graph.py
    

Download Publicly Available CT-ADE-SOC and CT-ADE-PT

You can download the publicly available CT-ADE-SOC and CT-ADE-PT versions from HuggingFace. These datasets contain standardized annotations from ClinicalTrials.gov.

Alternatively, the datasets are also available on Figshare.

    The above datasets are identical to the SOC and PT versions you will produce in the Typical Pipeline from Checkpoint section.

    Typical Pipeline from Checkpoint

    Follow this procedure if you aim to recreate the dataset detailed in our paper (CT-ADE-SOC, CT-ADE-PT).

    1. Place your data

    Place your unzipped MedDRA files in the directory ./data/MedDRA_25_0_English and your DrugBank XML database in the directory ./data/drugbank.

    2. Download checkpoint from HuggingFace

Download the chembl_approved, chembl_usan, clinicaltrials_gov, and pubchem files and place them in the matching directories under ./data.

    3. Extract DrugBank DBID Details

    Extract drug details from the DrugBank database.

    python c0_extract_drugbank_dbid_details.py

    4. Create Unified Chemical Database

    Create a unified database combining information from PubChem, DrugBank, and ChEMBL.

    python f0_create_unified_chemical_database.py

    5. Create Raw CT-ADE Dataset

    Generate the raw CT-ADE dataset from the processed clinical trials data.

    python g0_create_ct_ade_raw.py

    6. Create MedDRA Annotations

    Annotate the CT-ADE dataset with MedDRA terms.

    python g1_create_ct_ade_meddra.py

    7. Create Classification Datasets

    Generate the final classification datasets for modeling.

    python g2_create_ct_ade_classification_datasets.py

    8. (Optional) Create User-Friendly Labels

    As an optional step, you can create a version of the dataset where MedDRA codes are replaced with user-friendly text labels. To do this, run the following command:

    python g3_create_ct_ade_friendly_labels.py
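Taken together, steps 3 through 8 can be chained in a single shell session; a minimal sketch of the full checkpoint pipeline:

python c0_extract_drugbank_dbid_details.py          # step 3
python f0_create_unified_chemical_database.py       # step 4
python g0_create_ct_ade_raw.py                      # step 5
python g1_create_ct_ade_meddra.py                   # step 6
python g2_create_ct_ade_classification_datasets.py  # step 7
python g3_create_ct_ade_friendly_labels.py          # step 8, optional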

    Training Models

    Discriminative Models (DLLMs)

    Navigate to the modeling/DLLMs directory and run the training scripts with the desired configuration.

    cd modeling/DLLMs

    For single-GPU training, use this command:

    export CUDA_VISIBLE_DEVICES="0"; \
    export MIXED_PRECISION="bf16"; \
    FIRST_GPU=$(echo $CUDA_VISIBLE_DEVICES | cut -d ',' -f 1); \
    BASE_PORT=29500; \
    PORT=$(( $BASE_PORT + $FIRST_GPU )); \
    accelerate launch \
    --mixed_precision=$MIXED_PRECISION \
    --num_processes=$(( $(echo $CUDA_VISIBLE_DEVICES | grep -o "," | wc -l) + 1 )) \
    --num_machines=1 \
    --dynamo_backend=no \
    --main_process_port=$PORT \
    train.py

    For multi-GPU training, use this command:

    export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"; \
    export MIXED_PRECISION="bf16"; \
    FIRST_GPU=$(echo $CUDA_VISIBLE_DEVICES | cut -d ',' -f 1); \
    BASE_PORT=29500; \
    PORT=$(( $BASE_PORT + $FIRST_GPU )); \
    accelerate launch \
    --mixed_precision=$MIXED_PRECISION \
    --num_processes=$(( $(echo $CUDA_VISIBLE_DEVICES | grep -o "," | wc -l) + 1 )) \
    --num_machines=1 \
    --dynamo_backend=no \
    --main_process_port=$PORT \
    train.py

    Generative Models (GLLMs)

    Navigate to the modeling/GLLMs directory and run the training scripts for different configurations.

    cd modeling/GLLMs

Example configurations for LLama3, OpenBioLLM, and Meditron are provided in the folder. Copy the desired configuration into config.py and adjust it to your needs. Then execute the following for the SGE configuration:

    python train_SGE.py
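For instance, to train the SGE variant with the LLama3 settings (a sketch that simply copies the provided example configuration over config.py before launching):

cp config-llama3.py config.py
python train_SGE.py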

    Visit original content creator repository
    https://github.com/ds4dh/CT-ADE