Blog

  • sphractal

    Sphractal


    Description

    Sphractal is a package that provides functionalities to estimate the fractal dimension of complex 3D surfaces formed from overlapping spheres via the box-counting algorithm.

    Background

    • Atomic objects in the molecular and nanosciences are often represented as collections of spheres, with radii corresponding to the atomic radii of the individual components.

    • Some examples of these objects (inclusive of both fine- and coarse-grained representations of the individual components) are small molecules, proteins, nanoparticles, polymers, and porous materials such as zeolites and metal-organic frameworks (MOFs).
    • The overall properties of these objects are often significantly influenced by their surface properties, in particular the surface area available for interaction with other entities, which is related to the surface roughness.
    • Fractal dimension allows the surface complexity/roughness of objects to be measured quantitatively.
    • The fractal dimension can be estimated by applying the box-counting algorithm to surfaces represented as either:
      • approximated point clouds that are subsequently voxelised, or
      • mathematically exact surfaces.
      (A minimal sketch of the box-counting fit follows this list.)
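
    To make the idea concrete, the box-counting dimension is the slope of log N(ε) against log(1/ε), where N(ε) is the number of boxes of edge length ε needed to cover the surface. The snippet below is not Sphractal's API, just a minimal, self-contained illustration of that fit (the box sizes and counts are made up):

    import numpy as np

    def box_counting_dimension(eps, counts):
        """Estimate the fractal dimension as the slope of log N(eps) vs log(1/eps)."""
        slope, _intercept = np.polyfit(np.log(1.0 / np.asarray(eps)),
                                       np.log(np.asarray(counts)), 1)
        return slope

    # Hypothetical counts over successively halved box sizes for a rough surface:
    eps = [1.0, 0.5, 0.25, 0.125, 0.0625]   # box edge lengths
    counts = [12, 55, 260, 1210, 5600]      # boxes needed to cover the surface
    print(box_counting_dimension(eps, counts))  # ~2.2, i.e. rougher than a flat plane (2.0)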

    Features

    Aims

    • Representation of the surface as either voxelised point clouds or mathematically exact surfaces.
    • Efficient algorithm for 3D box-counting calculations.
    • Customisable parameters to control the level of detail and accuracy of the calculation.

    Installation

    Use pip or conda to install Sphractal:

    pip install sphractal
    conda install -c conda-forge sphractal

    Special Requirement for Point Cloud Surface Representation

    For the functionalities related to the voxelised point cloud surface representation to operate properly, Sphractal requires an executable compiled from another freely available repository.

    This can be done by:

    • Downloading the source code from the repository to a directory of your choice:
    git clone https://github.com/jon-ting/fastbc.git
    • Compiling the code into an executable (this works on any operating system) using either of the following commands, according to the instructions on the README.md page. The compilation you choose determines whether the box-counting algorithm runs with GPU acceleration. Feel free to rename the output file from the compilation:
    g++ 3DbinImBCcpu.cpp bcCPU.cpp -o 3DbinImBCcpu
    nvcc -O3 3DbinImBCgpu.cpp bcCUDA3D.cu -o 3DbinImBCgpu
    • (Optional) Setting the path to the compiled file as an environment variable accessible by Python (replace <PATH_TO_FASTBC> with the absolute path to the executable you just built); otherwise, you can always pass the path to the compiled file to the relevant functions:
    export FASTBC=<PATH_TO_FASTBC>

    Note that for the environment variable to be persistent (to still exist after the terminal is closed), the line should be added to your ~/.bashrc.

    Usage

    from sphractal import getExampleDataPath, runBoxCnt
    
    inpFile = getExampleDataPath()  # Replace with the path to your xyz or lmp file
    boxCntResults = runBoxCnt(inpFile)

    Check out the basic demonstration and application demonstration notebooks for further explanations and demonstrations!

    Documentation

    Detailed documentation is hosted on Read the Docs.

    Contributing

    Sphractal appreciates your enthusiasm and welcomes your expertise!

    Please check out the contributing guidelines and code of conduct. By contributing to this project, you agree to abide by its terms.

    License

    The project is distributed under an MIT License.

    Credits

    The package was created with cookiecutter using the py-pkgs-cookiecutter template. Speeding up the inner functions via just-in-time compilation with Numba was inspired by advice received during the NCI-NVIDIA Open Hackathon 2023.

    Contact

    Email: Jonathan.Ting@anu.edu.au / jonting97@gmail.com

    Feel free to reach out if you have any questions, suggestions, or feedback.

    Visit original content creator repository https://github.com/Jon-Ting/sphractal
  • csound-plugins

    External plugins for csound

    This is a repository of plugins for csound. It includes
    multiple plugins, each of which contains a series of opcodes.


    Documentation of all plugins

    Go to Documentation


    Plugins in this repository

    klib

    very efficient hashtables (dictionaries) and other data structures for csound

    poly

    Parallel and sequential multiplexing opcodes; they enable the creation and
    control of multiple instances of a csound opcode

    beosc

    Additive synthesis implementing the Loris sine+noise model

    else

    A miscellaneous collection of effects (distortion, saturation, ring modulation, etc.),
    generators (low freq. noise, chaos attractors, etc.), envelope generators, etc.

    jsfx

    jsfx support in csound; allows any of REAPER’s jsfx plugins to be loaded and
    controlled inside csound

    pathtools

    opcodes to handle paths and filenames in a cross-platform manner


    Installation

    The recommended way to install plugins is via risset
    (https://github.com/csound-plugins/risset). Risset itself
    can be installed via pip install risset.

    Then, to install any plugin:

    risset update
    risset install <pluginname>

    For example, to install klib and poly:

    risset install klib poly

    Using risset to install plugins also ensures integration
    with other tools like CsoundQt. Risset can also be used
    to show manual pages, list opcodes, etc.


    Manual Installation

    Plugins can be manually downloaded from the releases page:

    https://github.com/csound-plugins/csound-plugins/releases

    The binaries need to be copied to the plugins directory. The
    directory needs to be created if it does not exist.

    Platform   Csound Version   Plugins Path
    linux      6                $HOME/.local/lib/csound/6.0/plugins64
    linux      7                $HOME/.local/lib/csound/7.0/plugins64
    windows    6                C:\Users\$USERNAME\AppData\Local\csound\6.0\plugins64
    windows    7                C:\Users\$USERNAME\AppData\Local\csound\7.0\plugins64
    macos      6                $HOME/Library/csound/6.0/plugins64
    macos      7                $HOME/Library/csound/7.0/plugins64

    Note for Mac Users

    You will probably have to overcome Apple’s security mechanism to use the plugins.
    Right-click on each plugin and choose “Open with Terminal”. Confirm “Open” in the dialog panel.


    Build

    git clone  https://github.com/csound-plugins/csound-plugins
    cd csound-plugins
    git submodule update --init --recursive
    mkdir build
    cd build
    cmake ..
    cmake --build . --parallel
    cmake --install .

    Build documentation

    TODO

    Visit original content creator repository
    https://github.com/csound-plugins/csound-plugins

  • espup

    espup


    rustup for esp-rs

    espup is a tool for installing and maintaining the required toolchains for developing applications in Rust for Espressif SoCs.

    To better understand what espup installs, see the installation chapter of The Rust on ESP Book.

    Requirements

    Before running or installing espup, make sure that rustup is installed.

    Linux systems also require the following packages:

    • Ubuntu/Debian
      sudo apt-get install -y gcc build-essential curl pkg-config
    • Fedora
      sudo dnf -y install perl gcc
      • perl is required to build openssl-sys
    • openSUSE Tumbleweed/Leap
      sudo zypper install -y gcc ninja make
      

    Installation

    cargo install espup --locked

    It’s also possible to use cargo-binstall or to directly download the pre-compiled release binaries.

    Commands to install pre-compiled release binaries
    • Linux aarch64
      curl -L https://github.com/esp-rs/espup/releases/latest/download/espup-aarch64-unknown-linux-gnu -o espup
      chmod a+x espup
    • Linux x86_64
      curl -L https://github.com/esp-rs/espup/releases/latest/download/espup-x86_64-unknown-linux-gnu -o espup
      chmod a+x espup
    • macOS aarch64
      curl -L https://github.com/esp-rs/espup/releases/latest/download/espup-aarch64-apple-darwin -o espup
      chmod a+x espup
    • macOS x86_64
      curl -L https://github.com/esp-rs/espup/releases/latest/download/espup-x86_64-apple-darwin -o espup
      chmod a+x espup
    • Windows MSVC
      Invoke-WebRequest 'https://github.com/esp-rs/espup/releases/latest/download/espup-x86_64-pc-windows-msvc.exe' -OutFile .\espup.exe

    Quickstart

    See Usage section for more details.

    espup install

    Environment Variables Setup

    After installing the toolchain, on Unix systems, you need to source a file that will export the environment variables. This file is generated by espup and is located in your home directory by default. There are different ways to source the file:

    • Source this file in every terminal:

      1. Source the export file: . $HOME/export-esp.sh

      This approach requires running the command in every new shell.

    • Create an alias for executing the export-esp.sh:

      1. Copy and paste the following command to your shell’s profile (.profile, .bashrc, .zprofile, etc.): alias get_esprs='. $HOME/export-esp.sh'
      2. Refresh the configuration by restarting the terminal session or by running source [path to profile], for example, source ~/.bashrc.

      This approach requires running the alias in every new shell.

    • Add the environment variables to your shell profile directly:

      1. Add the content of $HOME/export-esp.sh to your shell’s profile: cat $HOME/export-esp.sh >> [path to profile], for example, cat $HOME/export-esp.sh >> ~/.bashrc.
      2. Refresh the configuration by restarting the terminal session or by running source [path to profile], for example, source ~/.bashrc.

    Important

    On Windows, environment variables are automatically injected into your system and don’t need to be sourced.

    Usage

    Usage: espup <COMMAND>
    
    Commands:
      completions  Generate completions for the given shell
      install      Installs Espressif Rust ecosystem
      uninstall    Uninstalls Espressif Rust ecosystem
      update       Updates Xtensa Rust toolchain
      help         Print this message or the help of the given subcommand(s)
    
    Options:
      -h, --help     Print help
      -V, --version  Print version
    

    Completions Subcommand

    For detailed instructions on how to enable tab completion, see Enable tab completion for Bash, Fish, Zsh, PowerShell or NuShell section.

    Usage: espup completions [OPTIONS] <SHELL>
    
    Arguments:
      <SHELL>  Shell to generate completions for [possible values: bash, zsh, fish, powershell, elvish, nushell]
    
    Options:
      -l, --log-level <LOG_LEVEL>  Verbosity level of the logs [default: info] [possible values: debug, info, warn, error]
      -h, --help                   Print help
    

    Install Subcommand

    Note

    Xtensa Rust destination path

    Installation paths can be modified by setting the environment variables CARGO_HOME and RUSTUP_HOME before running the install command. By default, toolchains will be installed under <rustup_home>/toolchains/esp, although this can be changed using the -a/--name option.

    Note

    GitHub API

    During the installation process, several GitHub queries are made, and these are subject to rate limits. The number of queries should not hit the limit unless you run the espup install command many times in a short span of time. If you want to use espup in CI, we recommend using it via the xtensa-toolchain action and setting the GITHUB_TOKEN environment variable; make sure GITHUB_TOKEN is not set when using espup on a host machine. See esp-rs/xtensa-toolchain#15 for more details on this.

    Usage: espup install [OPTIONS]
    
    Options:
      -d, --default-host <DEFAULT_HOST>
              Target triple of the host
    
              [possible values: x86_64-unknown-linux-gnu, aarch64-unknown-linux-gnu, x86_64-pc-windows-msvc, x86_64-pc-windows-gnu, x86_64-apple-darwin, aarch64-apple-darwin]
    
      -r, --esp-riscv-gcc
              Install Espressif RISC-V toolchain built with crosstool-ng

              Only install this if you don't want to use the system's RISC-V toolchain
    
      -f, --export-file <EXPORT_FILE>
              Relative or full path for the export file that will be generated. If no path is provided, the file will be generated under home directory (https://docs.rs/dirs/latest/dirs/fn.home_dir.html)
    
              [env: ESPUP_EXPORT_FILE=]
    
      -e, --extended-llvm
              Extends the LLVM installation.
    
              This will install the whole LLVM instead of only installing the libs.
    
      -l, --log-level <LOG_LEVEL>
              Verbosity level of the logs
    
              [default: info]
              [possible values: debug, info, warn, error]
    
      -a, --name <NAME>
              Xtensa Rust toolchain name
    
              [default: esp]
    
      -b, --stable-version <STABLE_VERSION>
              Stable Rust toolchain version.
    
              Note that only RISC-V targets use stable Rust channel.
    
              [default: stable]
    
      -k, --skip-version-parse
              Skips parsing Xtensa Rust version
    
      -s, --std
              Only install toolchains required for STD applications.
    
              With this option, espup will skip GCC installation (it will be handled by esp-idf-sys), hence you won't be able to build no_std applications.
    
      -t, --targets <TARGETS>
              Comma or space separated list of targets [esp32,esp32c2,esp32c3,esp32c6,esp32h2,esp32s2,esp32s3,esp32p4,all]
    
              [default: all]
    
      -v, --toolchain-version <TOOLCHAIN_VERSION>
              Xtensa Rust toolchain version
    
      -h, --help
              Print help (see a summary with '-h')
    

    Uninstall Subcommand

    Usage: espup uninstall [OPTIONS]
    
    Options:
      -l, --log-level <LOG_LEVEL>  Verbosity level of the logs [default: info] [possible values: debug, info, warn, error]
      -a, --name <NAME>            Xtensa Rust toolchain name [default: esp]
      -h, --help                   Print help
    

    Update Subcommand

    Usage: espup update [OPTIONS]
    
    Options:
      -d, --default-host <DEFAULT_HOST>
              Target triple of the host
    
              [possible values: x86_64-unknown-linux-gnu, aarch64-unknown-linux-gnu, x86_64-pc-windows-msvc, x86_64-pc-windows-gnu, x86_64-apple-darwin, aarch64-apple-darwin]
    
      -f, --export-file <EXPORT_FILE>
              Relative or full path for the export file that will be generated. If no path is provided, the file will be generated under home directory (https://docs.rs/dirs/latest/dirs/fn.home_dir.html)
    
              [env: ESPUP_EXPORT_FILE=]
    
      -e, --extended-llvm
              Extends the LLVM installation.
    
              This will install the whole LLVM instead of only installing the libs.
    
      -l, --log-level <LOG_LEVEL>
              Verbosity level of the logs
    
              [default: info]
              [possible values: debug, info, warn, error]
    
      -a, --name <NAME>
              Xtensa Rust toolchain name
    
              [default: esp]
    
      -b, --stable-version <STABLE_VERSION>
              Stable Rust toolchain version.
    
              Note that only RISC-V targets use stable Rust channel.
    
              [default: stable]
    
      -k, --skip-version-parse
              Skips parsing Xtensa Rust version
    
      -s, --std
              Only install toolchains required for STD applications.
    
              With this option, espup will skip GCC installation (it will be handled by esp-idf-sys), hence you won't be able to build no_std applications.
    
      -t, --targets <TARGETS>
              Comma or space separated list of targets [esp32,esp32c2,esp32c3,esp32c6,esp32h2,esp32s2,esp32s3,all]
    
              [default: all]
    
      -v, --toolchain-version <TOOLCHAIN_VERSION>
              Xtensa Rust toolchain version
    
      -h, --help
              Print help (see a summary with '-h')
    

    Enable Tab Completion for Bash, Fish, Zsh, PowerShell, or NuShell

    espup supports generating completion scripts for Bash, Fish, Zsh, PowerShell, and NuShell. See espup help completions for full details, but the gist is as simple as using one of the following:

    # Bash
    $ espup completions bash > ~/.local/share/bash-completion/completions/espup
    
    # Bash (macOS/Homebrew)
    $ espup completions bash > $(brew --prefix)/etc/bash_completion.d/espup.bash-completion
    
    # Fish
    $ mkdir -p ~/.config/fish/completions
    $ espup completions fish > ~/.config/fish/completions/espup.fish
    
    # Zsh
    $ espup completions zsh > ~/.zfunc/_espup
    
    # PowerShell v5.0+
    $ espup completions powershell >> $PROFILE.CurrentUserCurrentHost
    # or
    $ espup completions powershell | Out-String | Invoke-Expression
    
    # NuShell
    $ mkdir -p ~/.config/nushell/completions
    $ espup completions nushell > ~/.config/nushell/completions/espup.nu

    Note: you may need to restart your shell in order for the changes to take effect.

    For zsh, you must then add the following line in your ~/.zshrc before compinit:

    fpath+=~/.zfunc

    License

    Licensed under either of:

    • Apache License, Version 2.0
    • MIT License

    at your option.

    Contribution

    Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

    Visit original content creator repository https://github.com/esp-rs/espup
  • phonebook

    Assignment: “Phonebook”


    App screenshot

    Running

    • Clone the repository:
    git clone https://github.com/a11exe/phonebook.git
    • Build the maven project:
    mvn clean install
    • Now run:
    docker-compose up

    The application will start on http://localhost:8080; use credentials: user1 / 12345

    • Stop containers:
    docker-compose down
    • Remove old stopped containers created by docker-compose:
    docker-compose rm -f

    Stored data:

    User information in the system:

    • Login (English characters only, at least 3 characters, no special characters)
    • Password (at least 5 characters)
    • Full name (at least 5 characters)

    Stored information (one record):

    • Last name (required, at least 4 characters)
    • First name (required, at least 4 characters)
    • Patronymic (required, at least 4 characters)
    • Mobile phone (required)
    • Home phone (optional)
    • Address (optional)
    • E-mail (optional, standard validation)

    Assignment:

    Implement a “Phonebook” web project containing the following pages:

    • Authorization
    • Sign in (login/password)
    • Registration
    • Sign out

    Working with stored data

    • View all data, with the ability to filter by first/last name and phone number (partial match).
    • Add/edit/delete stored records

    The system is available only to authorized users. If an unauthorized user tries to open any page, they must be redirected to the authorization page. On the authorization page, they can enter a login and password to sign in, or register. Registration requires the following fields: full name, login, and password.

    Each authorized user has their own phonebook, i.e. each user sees only the records they created themselves.

    Important notes (mandatory requirements)

    • An admin panel for managing users is not required.
    • Phone numbers must be validated and be in a valid format for Ukraine, e.g. +380(66) 1234567
    • The application must include JUnit tests covering the code as thoroughly as possible.
    • Using Mockito is encouraged.
    • The project must be built with Maven.
    • Spring Boot must be used to run the application.
    • All application settings must reside in a properties file whose path is passed as a JVM argument (-Dlardi.conf=/path/to/file.properties).
    • The configuration file specifies the storage type. The storage type is read once at JVM startup (changes to the configuration file take effect only after a JVM restart).
    • Implement at least two storage options: a DBMS (MySQL) and file storage (XML/JSON/CSV, your choice).
    • Storage settings must be specified in the configuration file (host and user for the DBMS, or the file path for file storage).
    • For file storage, if the file(s) do not exist, they must be created.
    • For the DBMS storage, README.md must contain the SQL query for creating all required tables.
    • Data validation must be performed on the server side.
    • The application must have a clear logical separation between the presentation, logic, and data source layers.
    Visit original content creator repository https://github.com/a11exe/phonebook
  • apollo-log

    Apollo Server


    apollo-log

    A logging plugin for Apollo GraphQL Server

    ❤️ Please consider Sponsoring my work

    apollo-server doesn't ship with any comprehensive logging, and instead offloads that responsibility to the users and to the resolvers or context handler. This module provides uniform logging for the entire GraphQL request lifecycle, as provided by plugin hooks in apollo-server. The console/terminal result will resemble the image below:

    Requirements

    apollo-log is an evergreen 🌲 module.

    This module requires an Active LTS Node version (v10.23.1+).

    Install

    Using npm:

    npm install apollo-log

    Usage

    Setting up apollo-log is straightforward. Import and call the plugin function, passing any desired options, and pass the plugin in an array to apollo-server.

    import { ApolloLogPlugin } from 'apollo-log';
    import { ApolloServer } from 'apollo-server';
    
    const options = { ... };
    const plugins = [ApolloLogPlugin(options)];
    const apollo = new ApolloServer({
      plugins,
      ...
    });

    Please see the Apollo Plugins documentation for more information.

    Options

    events

    Type: Record<string, boolean>
    Default:

    {
      didEncounterErrors: true,
      didResolveOperation: false,
      executionDidStart: false,
      parsingDidStart: false,
      responseForOperation: false,
      validationDidStart: false,
      willSendResponse: true
    }

    Specifies which Apollo lifecycle events will be logged. The requestDidStart event is always logged, and by default didEncounterErrors and willSendResponse are logged.

    mutate

    Type: Function
    Default: (data: Record<string, string>) => Record<string, string>

    If specified, allows inspecting and mutating the data logged to the console for each message.

    prefix

    Type: String
    Default: apollo

    Specifies a prefix, colored by level, prepended to each log message.

    timestamp

    Type: Boolean

    If true, will prepend a timestamp in the HH:mm:ss format to each log message.

    Meta

    CONTRIBUTING

    LICENSE (Mozilla Public License)

    Visit original content creator repository https://github.com/shellscape/apollo-log
  • e2ee-msg-processor-js

    e2ee-msg-processor-js

    End to end encryption message processor which can be used to encrypt messages between two or more devices running on any platform and sent over any messaging protocol.

    It uses an external library for the implementation of the Double Ratchet Algorithm to handle session creation and key exchange (https://matrix.org/docs/projects/other/olm).

    The key exchange and message format is loosely based on the OMEMO protocol, which utilises 128-bit AES-GCM. Although OMEMO is an extension of the XMPP protocol, it doesn’t require XMPP as the transmission medium. The message format is output as JSON and can be reconfigured for transmission at the developer’s discretion.

    The LocalStorage interface will need implementing in order to provide a means of storing the sessions. In the example below we’ve used node-localstorage, which is sufficient for our needs; however, other situations may require a different storage mechanism, so the implementation is left to the developer.

    Here’s a contrived example simulating sending a message between Alice and Bob:

    import { LocalStorage } from 'node-localstorage';
    import { OmemoManager } from 'e2ee-msg-processor-js';
    
    (async () => {
        await OmemoManager.init();
    
        const aliceLocalStorage = new LocalStorage('./local_storage/aliceStore');
        const aliceOmemoManager = new OmemoManager('alice', aliceLocalStorage);
    
        const bobLocalStorage = new LocalStorage('./local_storage/bobStore');
        const bobOmemoManager = new OmemoManager('bob', bobLocalStorage);
        //bundle and device id need to be published via XMPP pubsub, or an equivalent service so that they are available for Alice and other clients devices wishing to communicate with Bob
        const bobsBundle = bobOmemoManager.generateBundle();
    
        aliceOmemoManager.processDevices('bob', [bobsBundle]);
        //This message object can be mapped to an XMPP send query or just sent as JSON over TLS or some other secure channel.
        const aliceToBobMessage = await aliceOmemoManager.encryptMessage('bob', 'To Bob from Alice');
    
        //Bob will then receive the message and process it
        const aliceDecrypted = await bobOmemoManager.decryptMessage(aliceToBobMessage);
        console.log(aliceDecrypted);
        
        //Bob can then reply without the need for a key bundle from Alice
        const bobToAliceMessage = await bobOmemoManager.encryptMessage('alice', 'To Alice from Bob');
        const bobDecrypted = await aliceOmemoManager.decryptMessage(bobToAliceMessage);
        console.log(bobDecrypted);
    
    })();
    

    WARNING: THIS LIBRARY IS UNTESTED AND THEREFORE INSECURE. USE AT YOUR OWN RISK.

    If you’re a cryptography researcher then please by all means try and break this and submit an issue or a PR.

    Visit original content creator repository
    https://github.com/cmyers/e2ee-msg-processor-js

  • MLOpsLifeCycle

    MLOps Lifecycle Project

    Requirements

    1. Model Storage: The trained model should be stored with a unique version, along with its hyperparameters and accuracy, in a storage solution like S3. This requires extending the Python script to persist this information.
    2. Scheduled Training: The model should be trained daily, with a retry mechanism for failures and an SLA defined for the training process.
    3. Model Quality Monitoring: Model accuracy should be tracked over time, with a dashboard displaying weekly average accuracy.
    4. Alerting: Alerts should be configured for:
      • If the latest model was generated more than 36 hours ago.
      • If the model accuracy exceeds a predefined threshold.
    5. Model API Access: The model should be accessible via an API for predictions. The API should pull the latest model version whenever available.

    Architecture Diagram

    [Architecture diagram]

    The architecture is designed to orchestrate the MLOps lifecycle across multiple components, as shown in the diagram.

    Component Descriptions

    • GitHub: Repository for storing the source code of the machine learning pipeline. It supports CI/CD to deploy the pipeline to the target environment.
    • Kubernetes: Container orchestrator that runs and manages all pipeline components.
    • Kubeflow: Manages and schedules the machine learning pipeline, deployed on Kubernetes.
    • MLFlow: Tracks experiments and serves as a model registry.
    • Minio: S3-compatible object storage for training datasets and MLFlow model artifacts.
    • MySQL: Backend database for MLFlow, storing information on experiments, runs, metrics, and parameters.
    • KServe: Exposes the trained model as an API, allowing predictions at scale over Kubernetes.
    • Grafana: Generates dashboards for accuracy metrics and manages alerting.
    • Slack: Receives notifications for specific metrics and alerts when data is unavailable.

    System Workflow

    1. Pipeline Development: An ML Engineer creates or modifies the training pipeline code in GitHub.
    2. CI/CD Deployment: CI/CD tests the pipeline, and once cleared, deploys it to Kubeflow with a user-defined schedule.
    3. Pipeline Execution:
      • The pipeline is triggered on schedule, initiating the sequence of tasks.
      • Data Fetching: Raw data is read from Minio.
      • Preprocessing: Data is preprocessed and split into training and validation/test sets.
      • Model Training: The model is trained, with hyperparameters, metrics, and model weights stored in MLFlow (a minimal logging sketch follows this list).
      • Deployment: The trained model is deployed via KServe, making it available as an API on Kubernetes.
      • Notifications: Slack notifications are triggered if pipeline metrics exceed defined thresholds (e.g., accuracy > 95%).
    4. Monitoring and Alerting:
      • Grafana Dashboard: Utilizes MLFlow’s MySQL database to visualize metrics, such as model accuracy.
      • Slack Alerts: Alerts are sent to Slack if no model has been updated within the last 36 hours.
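
    For reference, persisting a versioned model together with its hyperparameters and accuracy in MLflow typically looks like the sketch below. This is not the exact code used in this repository; the tracking URI, experiment name, parameters, and the scikit-learn flavour are assumptions:

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    mlflow.set_tracking_uri("http://mlflow-svc.mlflow:5000")  # assumed in-cluster service URI
    mlflow.set_experiment("iris-experiment")

    X, y = load_iris(return_X_y=True)
    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100).fit(X, y)
        mlflow.log_param("n_estimators", 100)             # hyperparameters
        mlflow.log_metric("accuracy", model.score(X, y))  # accuracy metric
        # Stores the model artifact in the (Minio-backed) artifact store and
        # registers it, creating a new model version on every run.
        mlflow.sklearn.log_model(model, "model", registered_model_name="iris-model")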

    Implementation Details

    • CI/CD: GitHub is shown in the architecture as the source and CI/CD provider, but it is not fully implemented here as my local resources could not connect with GitHub Actions (self-hosted runner setup was not used).

    Training Pipeline SLA Dimensions

    These SLA dimensions represent potential enhancements that could further improve the pipeline’s reliability and performance:

    • Training Frequency: Ideally, the pipeline should train the model daily to maintain relevance. Improved scheduling would enhance consistency.
    • Retry Mechanism: Implementing retries for errors would improve resilience.
    • Execution Time Limits: Adding maximum execution times for training runs would prevent long-running processes and increase efficiency.
    • Availability: Regular model updates (e.g., every 24-36 hours) would improve reliability, with alerts for delayed runs providing faster issue resolution.
    • Alerting: Alerts for accuracy deviations would aid in quicker troubleshooting.
    • Resource Usage: Resource limits for CPU and memory would optimize system performance and prevent overuse.
    • Data Freshness: Ensuring that input data is frequently updated would enhance model quality.
    • Enhanced Monitoring: Tracking additional metrics like accuracy trends, execution times, and resource utilization would improve insight into pipeline performance.

    These are aspirational improvements that would make the pipeline more robust and production-ready. The current implementation covers some aspects and could be improved if sufficient time and resources become available.

    Setup Guide

    This guide provides detailed instructions for setting up an MLOps environment using Minikube, Kubeflow, MLflow, KServe, Minio, Grafana, and Slack for notifications. It covers prerequisites, environment setup, and necessary configurations to deploy and monitor a machine learning pipeline.

    • This guide assumes you have a basic understanding of Kubernetes, Docker and Python.
    • Python3.8+ and Docker should be installed on your system before starting the guide.
    • Moreover, a Slack namespace is required with permissions to setup the App and Generate webhooks for Alerts setup.

    Pre-requisites

    Install Minikube

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
    # at least 4 CPUs, 8GB RAM, and 40GB disk space
    minikube start --cpus 4 --memory 8096 --disk-size=40g

    Link kubectl from Minikube if it’s not already installed:

    sudo ln -s $(which minikube) /usr/local/bin/kubectl

    Install helm if not already installed:

    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

    Kubeflow Setup

    1. Install Kubeflow Pipelines :
    export PIPELINE_VERSION=2.3.0
    kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
    kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
    kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
    kubectl set env deployment/ml-pipeline-ui -n kubeflow DISABLE_GKE_METADATA=true
    kubectl wait --for=condition=ready pod -l application-crd-id=kubeflow-pipelines --timeout=1000s -n kubeflow
    # You can Ctrl+C if all pods except proxy-agent are running

    Note : Allow up to 20 minutes for all Kubeflow pods to be running. Verify the status:

    kubectl get pods -n kubeflow

    Ensure that all pods are running (except proxy-agent, which is not applicable to us, as it serves as a proxy for establishing a connection to Google Cloud SQL and uses GCP metadata).

    2. Expose the ml-pipeline-ui via NodePort:
    kubectl patch svc ml-pipeline-ui -n kubeflow --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
    minikube service ml-pipeline-ui -n kubeflow

    Take note of the address to access ml-pipeline-ui as you’ll use it for pipeline setup.

    MLflow Setup

    1. Build and Push the MLflow-with-MySQL Docker Image to Docker Hub (continue if your system is not linux/amd64 architecture; otherwise skip this step, as the default syedshameersarwar/mlflow-mysql:v2.0.1 works for linux/amd64):
    cd ./mlflow
    docker build -t mlflow-mysql .
    docker login
    # create docker repository on docker hub for storing the image
    docker tag mlflow-mysql:latest <dockerhub-username>/<docker-repository>:latest
    docker push <dockerhub-username>/<docker-repository>:latest
    # Make sure to update mlflow.yaml to reference the pushed image
    2. Deploy MLflow Components:
    kubectl create ns mlflow
    kubectl apply -f pvc.yaml -n mlflow
    # Update secret.yaml with desired base64-encoded MySQL and Minio credentials, defaults are provided in the file
    kubectl apply -f secret.yaml -n mlflow
    kubectl apply -f minio.yaml -n mlflow
    kubectl apply -f mysql.yaml -n mlflow
    # Check if MySQL and Minio pods are running
    kubectl get pods -n mlflow
    # if all pods are running, proceed to the next step
    kubectl apply -f mlflow.yaml -n mlflow

    Verify Deployment : Check if MLflow pod is running.

    kubectl get pods -n mlflow
    3. Expose MLflow Service via Minikube:
    minikube service mlflow-svc -n mlflow

    Note the address to access MLflow UI.

    KServe Setup

    1. Clone and Install KServe :
    cd ..
    git clone https://github.com/kserve/kserve.git
    cd kserve
    ./hack/quick_install.sh

    Verify Installation : Check all necessary pods:

    kubectl get pods -n kserve
    kubectl get pods -n knative-serving
    kubectl get pods -n istio-system
    kubectl get pods -n cert-manager
    # if all pods are running, go back to the root directory
    cd ..
    2. Configure Service Account and Cluster Role:
    • Copy Minio credentials (base64-encoded) from MLflow's secret.yaml to sa/kserve-mlflow-sa.yaml.

      • The user field will populate AWS_ACCESS_KEY_ID in sa/kserve-mlflow-sa.yaml.
      • The secretkey field will populate AWS_SECRET_ACCESS_KEY in sa/kserve-mlflow-sa.yaml.
      • Leave the region field as it is. (base64 encoded for us-east-1).
    • Apply the service account and cluster role:

    # allow kserve to access Minio
    kubectl apply -f sa/kserve-mlflow-sa.yaml
    # allow Kubeflow to access kserve and create inferenceservices
    kubectl apply -f sa/kserve-kubeflow-clusterrole.yaml

    Minio Bucket Creation

    1. Port-forward Minio UI :
    kubectl port-forward svc/minio-ui -n mlflow 9001:9001
    2. Login and Create Buckets
    • Access localhost:9001 with credentials (base64-decoded) you setup in secret.yaml of MLFlow and create two buckets:
      • data

      • mlflow

    • Then, navigate to Object Browser -> mlflow -> Create a new path called experiments.
    • Now upload the iris.csv dataset to the data bucket.
    • You can close the port-forwarding once done by pressing Ctrl+C.
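
    If you prefer to script this step instead of clicking through the Minio console, the buckets, the experiments prefix, and the dataset upload can also be done programmatically. The sketch below uses boto3 and assumes the Minio S3 API has been port-forwarded to localhost:9000 and that the credentials are the (base64-decoded) values from MLflow's secret.yaml:

    import boto3

    # Assumptions: Minio S3 API reachable on localhost:9000 (e.g. via a port-forward of
    # the Minio API service) and credentials matching MLflow's secret.yaml.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="<minio-user>",
        aws_secret_access_key="<minio-secretkey>",
    )

    s3.create_bucket(Bucket="data")
    s3.create_bucket(Bucket="mlflow")
    s3.put_object(Bucket="mlflow", Key="experiments/")  # empty "folder" prefix
    s3.upload_file("iris.csv", "data", "iris.csv")      # upload the dataset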

    Slack Notifications

    1. Create a Slack App for notifications:
    • Follow the Slack API Quickstart Guide and obtain a webhook URL for your Slack workspace. (You can skip the invite part of step 3, and skip step 4 entirely.)

    • Test with curl as shown in the Slack setup guide.

    Pipeline Compilation and Setup

    1. Compile Pipeline :
    # Create a python virtual environment for pipeline compilation
    mkdir src
    cd src
    python3 -m venv env
    source env/bin/activate
    pip install kfp[kubernetes]
    • Create a pipeline.py file in this directory and include the contents from src/pipeline.py.

    • Update the Slack webhook_url on line #336 in pipeline.py.

    2. Generate Pipeline YAML:
    python3 pipeline.py
    # deactivate the virtual env after pipeline generation
    deactivate

    The generated file iris_mlflow_kserve_pipeline.yaml is now ready for upload to Kubeflow.
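
    src/pipeline.py itself is not reproduced here; the sketch below only illustrates the general shape of a KFP v2 pipeline definition and how the YAML is produced. The component body, base image, package list, pipeline name, and retry settings are assumptions rather than the repository's actual code:

    from kfp import compiler, dsl

    @dsl.component(base_image="python:3.10",
                   packages_to_install=["scikit-learn", "mlflow", "boto3"])
    def train_model(data_path: str) -> float:
        # Placeholder body: the real component fetches data from Minio, trains the
        # model, logs everything to MLflow, and returns the accuracy.
        return 0.0

    @dsl.pipeline(name="iris-mlflow-kserve")
    def iris_pipeline(data_path: str = "/data"):
        train_task = train_model(data_path=data_path)
        train_task.set_retry(num_retries=3)  # see the Limitations section regarding retries

    compiler.Compiler().compile(iris_pipeline, "iris_mlflow_kserve_pipeline.yaml")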

    3. Upload and Schedule Pipeline in Kubeflow:
    • Visit the ml-pipeline-ui address from Kubeflow setup and Click on Upload Pipeline.

    • Give the pipeline a name (e.g., IrisExp); keep it short, as I ran into an issue where Kubeflow could not read pod information because long pipeline names produced overly long pod names.

    • Upload the generated iris_mlflow_kserve_pipeline.yaml file.

    • Go to Experiments -> Create New Experiment and name it iris-mlflow-kserve.

    • Configure a recurring run :

      • Go to Recurring Runs -> Create Recurring Run -> select the pipeline created above
      • Make sure to give the recurring run config a short name, due to the same naming issue mentioned above.
      • Select the iris-mlflow-kserve experiment.
      • Setup Run Type and Run Trigger details as mentioned below:
        • Run Type : Recurring

        • Trigger Type : Periodic, every 1 day, Maximum concurrent runs: 1, Catchup: False.

        • Run Parameters : data_path: /data

    • Click Start and the pipeline will run daily starting from the next day at the time of run creation.

    You can also manually trigger a one-off run for testing purposes.

    • Follow the same steps as above but select Run Type as One-off and click Start.
    • This will trigger the pipeline immediately. You can monitor the pipeline in the Runs tab.
    • [Kubeflow UI screenshots of the pipeline runs]

    Model Inference

    1. After successful pipeline execution, you can get the API endpoint for inference:
    kubectl get inferenceservice -n kubeflow
    • Note the service name and URL host.
    2. Make Prediction Request: Create an input.json file with the following content:
    {
      "instances": [
        [6.8, 2.8, 4.8, 1.4],
        [5.1, 3.5, 1.4, 0.2],
        [6.0, 3.4, 4.5, 1.6],
        [4.8, 3.4, 1.6, 0.2]
      ]
    }
    3. Use curl to send the request:
    export INGRESS_HOST=$(minikube ip)
    export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
    export INF_SERVICE_NAME=<inference-service-name>
    export INF_SERVICE_HOST=<inference-service-host>
    curl -H "Host:${INF_SERVICE_HOST}" -H "Content-Type: application/json" "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${INF_SERVICE_NAME}:predict" -d @./input.json
    • You should see the predictions for the input data.
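
    The same request can also be issued from Python, which may be handier for scripting. This mirrors the curl call above using KServe's V1 inference protocol; the host, port, and service values are placeholders resolved exactly as in the previous step:

    import requests

    ingress_host = "<minikube-ip>"            # output of `minikube ip`
    ingress_port = "<istio-http2-nodeport>"   # NodePort resolved above
    service_name = "<inference-service-name>"
    service_host = "<inference-service-host>"

    resp = requests.post(
        f"http://{ingress_host}:{ingress_port}/v1/models/{service_name}:predict",
        headers={"Host": service_host, "Content-Type": "application/json"},
        json={"instances": [[6.8, 2.8, 4.8, 1.4], [5.1, 3.5, 1.4, 0.2]]},
    )
    print(resp.json())  # e.g. {"predictions": [...]}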

    You can also visit the MLflow UI to see the model versions and experiment details, including metrics, hyperparameters, and artifacts.

    [Screenshots: MLflow UI model registry and experiment runs]

    Grafana Setup for Monitoring and Alerts

    Grafana is used to set up a dashboard for monitoring accuracy metric trends on an hourly, daily, and weekly basis. Moreover, no-data alerts are configured to notify if no new models have been trained within 36 hours.

    1. Deploy Grafana :
    # move to the root directory if not already
    cd ..
    cd grafana
    kubectl create ns grafana
    kubectl apply -f grafana.yaml -n grafana
    # Check if Grafana pod is running
    kubectl get pods -n grafana
    # if pod is running, expose the service
    minikube service grafana -n grafana
    2. Login to Grafana with default credentials (admin/admin) and change the password if you want.
    3. Set Up MySQL as Data Source:
    • Go to Connections -> Data Sources -> Add data source -> MySQL and configure with MySQL credentials (base64-decoded) from secret.yaml of MLFlow.
      • Host : mysql-svc.mlflow:3306

      • Database : <database> in secret.yaml (mysql-secrets) during mlflow setup (default: mlflowdb)

      • User: <username> in secret.yaml (mysql-secrets) during mlflow setup (default: admin)

      • Password: <password> in secret.yaml (mysql-secrets) during mlflow setup (default: vIRtUaLMinDs)

    • Save and Test the connection.

    Add Dashboard for Accuracy Metrics

    1. Import the Dashboard:
      • Navigate to Dashboards -> New -> Import.
      • Upload the file accuracy_metrics.json from grafana/.
      • Optionally, change the dashboard name.
      • Select the MySQL datasource created in previous steps.
      • Click Import to make the dashboard visible.

    Set Up Alerts in Grafana

    1. Create a Slack Contact Point:

      • Navigate to Alerting -> Contact points -> Create contact point.
      • Provide a descriptive name for the contact point.
      • Under Integration, select Slack and enter the webhook URL obtained during Slack setup.
      • Test the configuration to ensure the Slack notification works.
      • Click Save.
    2. Create an Alert Rule:

      • Navigate to Alerting -> Alert Rules -> New Alert Rule.
      • Provide a name for the rule, such as Iris No Data Alert.
      • Under Define query, select Code instead of Query Builder, and enter the following SQL query:
        select COUNT(m.timestamp) as recordcount 
        FROM experiments as e 
        INNER JOIN runs as r ON r.experiment_id = e.experiment_id 
        INNER JOIN metrics as m ON r.run_uuid = m.run_uuid 
        where e.name = 'iris-experiment' 
        HAVING MAX(FROM_UNIXTIME(m.timestamp/1000)) > DATE_ADD(NOW(), INTERVAL -36 HOUR);
      • In Rule Type -> Expressions, delete the default expressions and add a new Threshold expression.
        • Set the alert condition: WHEN Input A IS BELOW 1.
      • Click Preview to verify:
        • The status should be Normal if a new model run has occurred within the last 36 hours, otherwise it will show Alert.
    3. Configure Evaluation Behavior:

      • Create a new folder named no-data.
      • Create an evaluation group named no-data with an evaluation interval of 5 minutes.
      • Set Pending period to None.
      • Under No data and error handling:
        • Select Alerting for Alert state if no data or all values are null.
        • Select Normal for Alert state if execution error or timeout.
    4. Configure Labels and Notifications:

      • Add the Slack contact point created earlier under Label and Notifications.
      • Provide a summary and description for the notification message.
    5. Save and Exit:

      • Save the rule to complete the setup.

    Expected Outcome

    • You should receive a Slack notification if no model has been trained and registered within the last 36 hours in the experiment iris-experiment.

    • The Grafana dashboard also provides insights into the average accuracy metrics over different periods. While the current pipeline runs daily, this setup would offer useful insights if the training frequency changes, including hourly, daily, or weekly trends.

      [Screenshots: Grafana accuracy dashboard]

    Limitations and Alternatives

    1. Retry Issue: Although retries are configured, a known issue with Kubeflow (issue #11288) prevents them from working as expected. Alternatives such as Apache Airflow or VertexAI could address this limitation.
    2. Single-Environment Setup: The current setup operates in a single environment, lacking the flexibility of development, staging, and production environments.
    3. Manual Intervention: There is no manual review process before deploying a model to production, which may be beneficial. Alternatives like Apache Airflow’s custom sensors could allow manual interventions.
    4. Kubernetes Dependency: As a fully Kubernetes-native system, each pipeline component runs as a pod. This design is suitable for high-resource nodes but may not work well in low-resource environments.
    5. Additional Considerations: Code readability, testability, scalability, GPU node scheduling, distributed training, and resource optimization are important aspects to consider for long-term scalability and robustness.

    Cleanup

    minikube stop
    minikube delete

    Conclusion

    This guide provides a comprehensive setup for an MLOps lifecycle, covering model training, monitoring, alerting, and API deployment. While the implementation is limited by time and resource constraints, it offers a solid foundation for a production-ready MLOps environment. The architecture diagram, system workflow, and SLA dimensions provide a clear understanding of the system’s components and requirements. By following the setup guide, users can deploy and monitor the machine learning pipeline, track model accuracy, and receive alerts for critical metrics. The guide also highlights potential enhancements and alternative solutions to address limitations and improve the system’s reliability and performance.

    Visit original content creator repository https://github.com/syedshameersarwar/MLOpsLifeCycle
  • MLOpsLifeCycle

    MLOps Lifecycle Project

    Requirements

    1. Model Storage: The trained model should be stored with a unique version, along with its hyperparameters and accuracy, in a storage solution like S3. This requires extending the Python script to persist this information.
    2. Scheduled Training: The model should be trained daily, with a retry mechanism for failures and an SLA defined for the training process.
    3. Model Quality Monitoring: Model accuracy should be tracked over time, with a dashboard displaying weekly average accuracy.
    4. Alerting: Alerts should be configured for:
      • If the latest model was generated more than 36 hours ago.
      • If the model accuracy exceeds a predefined threshold.
    5. Model API Access: The model should be accessible via an API for predictions. The API should pull the latest model version whenever available.

    Architecture Diagram

    alt text

    The architecture is designed to orchestrate the MLOps lifecycle across multiple components, as shown in the diagram.

    Component Descriptions

    • GitHub: Repository for storing the source code of the machine learning pipeline. It supports CI/CD to deploy the pipeline to the target environment.
    • Kubernetes: Container orchestrator that runs and manages all pipeline components.
    • Kubeflow: Manages and schedules the machine learning pipeline, deployed on Kubernetes.
    • MLFlow: Tracks experiments and serves as a model registry.
    • Minio: S3-compatible object storage for training datasets and MLFlow model artifacts.
    • MySQL: Backend database for MLFlow, storing information on experiments, runs, metrics, and parameters.
    • KServe: Exposes the trained model as an API, allowing predictions at scale over Kubernetes.
    • Grafana: Generates dashboards for accuracy metrics and manages alerting.
    • Slack: Receives notifications for specific metrics and alerts when data is unavailable.

    System Workflow

    1. Pipeline Development: An ML Engineer creates or modifies the training pipeline code in GitHub.
    2. CI/CD Deployment: CI/CD tests the pipeline, and once cleared, deploys it to Kubeflow with a user-defined schedule.
    3. Pipeline Execution:
      • The pipeline is triggered on schedule, initiating the sequence of tasks.
      • Data Fetching: Raw data is read from Minio.
      • Preprocessing: Data is preprocessed and split into training and validation/test sets.
      • Model Training: The model is trained, with hyperparameters, metrics, and model weights stored in MLFlow.
      • Deployment: The trained model is deployed via KServe, making it available as an API on Kubernetes.
      • Notifications: Slack notifications are triggered if pipeline metrics exceed defined thresholds (e.g., accuracy > 95%).
    4. Monitoring and Alerting:
      • Grafana Dashboard: Utilizes MLFlow’s MySQL database to visualize metrics, such as model accuracy.
      • Slack Alerts: Alerts are sent to Slack if no model has been updated within the last 36 hours.

    Implementation Details

    • CI/CD: GitHub is shown in the architecture as the source and CI/CD provider, but it is not fully implemented here as my local resources could not connect with GitHub Actions (self-hosted runner setup was not used).

    Training Pipeline SLA Dimensions

    These SLA dimensions represent potential enhancements that could further improve the pipeline’s reliability and performance:

    • Training Frequency: Ideally, the pipeline should train the model daily to maintain relevance. Improved scheduling would enhance consistency.
    • Retry Mechanism: Implementing retries for errors would improve resilience.
    • Execution Time Limits: Adding maximum execution times for training runs would prevent long-running processes and increase efficiency.
    • Availability: Regular model updates (e.g., every 24-36 hours) would improve reliability, with alerts for delayed runs providing faster issue resolution.
    • Alerting: Alerts for accuracy deviations would aid in quicker troubleshooting.
    • Resource Usage: Resource limits for CPU and memory would optimize system performance and prevent overuse.
    • Data Freshness: Ensuring that input data is frequently updated would enhance model quality.
    • Enhanced Monitoring: Tracking additional metrics like accuracy trends, execution times, and resource utilization would improve insight into pipeline performance.

    These are aspirational improvements that would make the pipeline more robust and production-ready. The current implementation covers some aspects and could be improved if sufficient time and resources become available.

    Setup Guide

    This guide provides detailed instructions for setting up an MLOps environment using Minikube, Kubeflow, MLflow, KServe, Minio, Grafana, and Slack for notifications. It covers prerequisites, environment setup, and necessary configurations to deploy and monitor a machine learning pipeline.

    • This guide assumes you have a basic understanding of Kubernetes, Docker and Python.
    • Python3.8+ and Docker should be installed on your system before starting the guide.
    • Moreover, a Slack namespace is required with permissions to setup the App and Generate webhooks for Alerts setup.

    Pre-requisites

    Install Minikube

    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
    # alteast 4 CPUs, 8GB RAM, and 40GB disk space
    minikube start --cpus 4 --memory 8096 --disk-size=40g

    Link kubectl from Minikube If it’s not already installed:

    sudo ln -s $(which minikube) /usr/local/bin/kubectl

    Install helm if not already installed:

    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

    Kubeflow Setup

    1. Install Kubeflow Pipelines :

    export PIPELINE_VERSION=2.3.0
    kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
    kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
    kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION"
    kubectl set env deployment/ml-pipeline-ui -n kubeflow DISABLE_GKE_METADATA=true
    kubectl wait --for=condition=ready pod -l application-crd-id=kubeflow-pipelines --timeout=1000s -n kubeflow
    # You can Ctr1+C if all pods except proxy-agent are running

    Note : Allow up to 20 minutes for all Kubeflow pods to be running. Verify the status:

    kubectl get pods -n kubeflow

    Ensure that all pods are running (except proxy-agent, which is not applicable to us as it serves as proxy for establishing connection to Google Cloud’s SQL and uses GCP metadata).

    1. Expose the ml-pipeline-ui via NodePort :

    kubectl patch svc ml-pipeline-ui -n kubeflow --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
    minikube service ml-pipeline-ui -n kubeflow

    Take note of the address to access ml-pipeline-ui as you’ll use it for pipeline setup.

    MLflow Setup

    1. Build and Push MLflow with Mysql Docker Image to Dockerhub (Continue if your system is not linux/amd64 architecture, otherwise skip this step as the default syedshameersarwar/mlflow-mysql:v2.0.1 works for linux/amd64):

    cd ./mlflow
    docker build -t mlflow-mysql .
    docker login
    # create docker repository on docker hub for storing the image
    docker tag mlflow-mysql:latest <dockerhub-username>/<docker-repository>:latest
    docker push <dockerhub-username>/<docker-repository>:latest
    # Make sure to update mlfow.yaml to reference the pushed image
    1. Deploy MLflow Components :

    kubectl create ns mlflow
    kubectl apply -f pvc.yaml -n mlflow
    # Update secret.yaml with desired base64-encoded MySQL and Minio credentials, defaults are provided in the file
    kubectl apply -f secret.yaml -n mlflow
    kubectl apply -f minio.yaml -n mlflow
    kubectl apply -f mysql.yaml -n mlflow
    # Check if MySQL and Minio pods are running
    kubectl get pods -n mlflow
    # if all pods are running, proceed to the next step
    kubectl apply -f mlflow.yaml -n mlflow

    Verify Deployment : Check if MLflow pod is running.

    kubectl get pods -n mlflow
    1. Expose MLflow Service via Minikube :
    minikube service mlflow-svc -n mlflow

    Note the address to access MLflow UI.

    KServe Setup

    1. Clone and Install KServe :

    cd ..
    git clone https://github.com/kserve/kserve.git
    cd kserve
    ./hack/quick_install.sh

    Verify Installation : Check all necessary pods:

    kubectl get pods -n kserve
    kubectl get pods -n knative-serving
    kubectl get pods -n istio-system
    kubectl get pods -n cert-manager
    # if all pods are running, go back to the root directory
    cd ..
    1. Configure Service Account and Cluster Role :
    • Copy Minio credentials (base64-encoded) from MLflows secret.yaml to sa/kserve-mlflow-sa.yaml.

      • The user field will populate AWS_ACCESS_KEY_ID in sa/kserve-mlflow-sa.yaml.
      • The secretkey field will populate AWS_SECRET_ACCESS_KEY in sa/kserve-mlflow-sa.yaml.
      • Leave the region field as it is. (base64 encoded for us-east-1).
    • Apply the service account and cluster role:

    # allow kserve to access Minio
    kubectl apply -f sa/kserve-mlflow-sa.yaml
    # allow Kubeflow to access kserve and create inferenceservices
    kubectl apply -f sa/kserve-kubeflow-clusterrole.yaml

    Minio Bucket Creation

    1. Port-forward Minio UI :
    kubectl port-forward svc/minio-ui -n mlflow 9001:9001
    1. Login and Create Buckets
    • Access localhost:9001 with credentials (base64-decoded) you setup in secret.yaml of MLFlow and create two buckets:
      • data

      • mlflow

    • Then, navigate to Object Browser -> mlflow -> Create a new path called experiments.
    • Now upload the iris.csv dataset to the data bucket.
    • You can close the port-forwarding once done by pressing Ctrl+C.

    Slack Notifications

    1. Create a Slack App for notifications:
    • Follow the Slack API Quickstart Guide and obtain a webhook URL for your Slack workspace. (You can skip invite part of step 3 and step 4 entirely).

    • Test with curl as shown in the Slack setup guide.

    Pipeline Compilation and Setup

    1. Compile Pipeline :

    # Create a python virtual environment for pipeline compilation
    mkdir src
    cd src
    python3 -m venv env
    source env/bin/activate
    pip install kfp[kubernetes]
    • Create a pipeline.py file in this directory and include the contents from src/pipeline.py.

    • Update the Slack webhook_url on line #336 in pipeline.py.

    1. Generate Pipeline YAML :

    python3 pipeline.py
    # deactivate the virtual env after pipeline generation
    deactivate

    The generated file iris_mlflow_kserve_pipeline.yaml is now ready for upload to Kubeflow.

    1. Upload and Schedule Pipeline in Kubeflow :
    • Visit the ml-pipeline-ui address from the Kubeflow setup and click Upload Pipeline.

    • Give the pipeline a name (e.g., IrisExp). Keep the name short; with a long name I ran into an issue where Kubeflow could not read pod information because the generated pod names became too long.

    • Upload the generated iris_mlflow_kserve_pipeline.yaml file.

    • Go to Experiments -> Create New Experiment and name it iris-mlflow-kserve.

    • Configure a recurring run :

      • Go to Recurring Runs -> Create Recurring Run -> select the pipeline created above.
      • Make sure to give the recurring run config a short name as well, due to the same issue mentioned above.
      • Select the iris-mlflow-kserve experiment.
      • Set up the Run Type and Run Trigger details as follows:
        • Run Type : Recurring

        • Trigger Type : Periodic, every 1 day, Maximum concurrent runs: 1, Catchup: False.

        • Run Parameters : data_path: /data

    • Click Start; the pipeline will then run daily, starting the next day at the time the run was created.

    You can also manually trigger a one-off run for testing purposes.

    • Follow the same steps as above but select Run Type as One-off and click Start.
    • This will trigger the pipeline immediately. You can monitor the pipeline in the Runs tab.

    Model Inference

    1. After successful pipeline execution, get the API endpoint for inference :
    kubectl get inferenceservice -n kubeflow
    • Note the service name and URL host.
    1. Make Prediction Request :
      Create an input.json file with the following content:

    {
      "instances": [
        [6.8, 2.8, 4.8, 1.4],
        [5.1, 3.5, 1.4, 0.2],
        [6.0, 3.4, 4.5, 1.6],
        [4.8, 3.4, 1.6, 0.2]
      ]
    }
    1. Use curl to send the request:

    export INGRESS_HOST=$(minikube ip)
    export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
    export INF_SERVICE_NAME=<inference-service-name>
    export INF_SERVICE_HOST=<inference-service-host>
    curl -H "Host:${INF_SERVICE_HOST}" -H "Content-Type: application/json" "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${INF_SERVICE_NAME}:predict" -d @./input.json
    • You should see the predictions for the input data.
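
    The same request can be sent from Python, reusing the environment variables exported above (requests is the only dependency):

    import os
    import requests

    url = (f"http://{os.environ['INGRESS_HOST']}:{os.environ['INGRESS_PORT']}"
           f"/v1/models/{os.environ['INF_SERVICE_NAME']}:predict")
    headers = {"Host": os.environ["INF_SERVICE_HOST"], "Content-Type": "application/json"}

    with open("input.json") as f:
        payload = f.read()

    resp = requests.post(url, data=payload, headers=headers)
    print(resp.json())  # e.g. {"predictions": [...]}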

    You can also visit MLflow UI to see the model versions and experiment details with metrics, hyperparameters, and artifacts.

    Grafana Setup for Monitoring and Alerts

    Grafana is used to set up a dashboard for monitoring accuracy metric trends on an hourly, daily, and weekly basis. In addition, a no-data alert is configured to send a notification if no new model has been trained within 36 hours.

    1. Deploy Grafana :

    # move to the root directory if not already
    cd ..
    cd grafana
    kubectl create ns grafana
    kubectl apply -f grafana.yaml -n grafana
    # Check if Grafana pod is running
    kubectl get pods -n grafana
    # if pod is running, expose the service
    minikube service grafana -n grafana
    1. Log in to Grafana with the default credentials (admin/admin) and change the password if you want.
    2. Set Up MySQL as Data Source :
    • Go to Connections -> Data Sources -> Add data source -> MySQL and configure it with the (base64-decoded) MySQL credentials from MLflow's secret.yaml.
      • Host : mysql-svc.mlflow:3306

      • Database : <database> in secret.yaml (mysql-secrets) during mlflow setup (default: mlflowdb)

      • User: <username> in secret.yaml (mysql-secrets) during mlflow setup (default: admin)

      • Password: <password> in secret.yaml (mysql-secrets) during mlflow setup (default: vIRtUaLMinDs)

    • Save and Test the connection.

    Add Dashboard for Accuracy Metrics

    1. Import the Dashboard:
      • Navigate to Dashboards -> New -> Import.
      • Upload the file accuracy_metrics.json from grafana/.
      • Optionally, change the dashboard name.
      • Select the MySQL datasource created in previous steps.
      • Click Import to make the dashboard visible.

    Set Up Alerts in Grafana

    1. Create a Slack Contact Point:

      • Navigate to Alerting -> Contact points -> Create contact point.
      • Provide a descriptive name for the contact point.
      • Under Integration, select Slack and enter the webhook URL obtained during Slack setup.
      • Test the configuration to ensure the Slack notification works.
      • Click Save.
    2. Create an Alert Rule:

      • Navigate to Alerting -> Alert Rules -> New Alert Rule.
      • Provide a name for the rule, such as Iris No Data Alert.
      • Under Define query, select Code instead of Query Builder, and enter the following SQL query (a sketch for sanity-checking this query outside Grafana is shown after this list):

        SELECT COUNT(m.timestamp) AS recordcount
        FROM experiments AS e
        INNER JOIN runs AS r ON r.experiment_id = e.experiment_id
        INNER JOIN metrics AS m ON r.run_uuid = m.run_uuid
        WHERE e.name = 'iris-experiment'
        HAVING MAX(FROM_UNIXTIME(m.timestamp/1000)) > DATE_ADD(NOW(), INTERVAL -36 HOUR);
      • In Rule Type -> Expressions, delete the default expressions and add a new Threshold expression.
        • Set the alert condition: WHEN Input A IS BELOW 1.
      • Click Preview to verify:
        • The status should be Normal if a new model run has occurred within the last 36 hours; otherwise, it will show Alert.
    3. Configure Evaluation Behavior:

      • Create a new folder named no-data.
      • Create an evaluation group named no-data with an evaluation interval of 5 minutes.
      • Set Pending period to None.
      • Under No data and error handling:
        • Select Alerting for Alert state if no data or all values are null.
        • Select Normal for Alert state if execution error or timeout.
    4. Configure Labels and Notifications:

      • Add the Slack contact point created earlier under Labels and Notifications.
      • Provide a summary and description for the notification message.
    5. Save and Exit:

      • Save the rule to complete the setup.
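
    Before relying on the alert, you can sanity-check the no-data query from step 2 directly against the MLflow database. A minimal sketch, assuming you port-forward the MySQL service (kubectl port-forward svc/mysql-svc -n mlflow 3306:3306) and use the default credentials from MLflow's secret.yaml:

    import pymysql  # pip install pymysql

    conn = pymysql.connect(host="127.0.0.1", port=3306,
                           user="admin", password="vIRtUaLMinDs", database="mlflowdb")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT COUNT(m.timestamp) AS recordcount
                FROM experiments AS e
                INNER JOIN runs AS r ON r.experiment_id = e.experiment_id
                INNER JOIN metrics AS m ON r.run_uuid = m.run_uuid
                WHERE e.name = 'iris-experiment'
                HAVING MAX(FROM_UNIXTIME(m.timestamp/1000)) > DATE_ADD(NOW(), INTERVAL -36 HOUR)
            """)
            row = cur.fetchone()
        # A row means a run happened within the last 36 hours; no row is what triggers the alert
        print(row)
    finally:
        conn.close()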

    Expected Outcome

    • You should receive a Slack notification if no model has been trained and registered within the last 36 hours in the experiment iris-experiment.

    • The Grafana dashboard also provides insights into the average accuracy metrics over different periods. While the current pipeline runs daily, this setup would offer useful insights if the training frequency changes, including hourly, daily, or weekly trends.

    Limitations and Alternatives

    1. Retry Issue: Although retries are configured, a known issue with Kubeflow (issue #11288) prevents them from working as expected. Alternatives such as Apache Airflow or Vertex AI could address this limitation.
    2. Single-Environment Setup: The current setup operates in a single environment, lacking separate development, staging, and production environments.
    3. Manual Intervention: There is no manual review step before a model is deployed to production, although such a gate could be beneficial. Alternatives like Apache Airflow’s custom sensors could enable this kind of manual intervention.
    4. Kubernetes Dependency: Because the system is fully Kubernetes-native, each pipeline component runs as a pod. This design is suitable for high-resource nodes but may not work well in low-resource environments.
    5. Additional Considerations: Code readability, testability, scalability, GPU node scheduling, distributed training, and resource optimization are important aspects to consider for long-term scalability and robustness.

    Cleanup

    minikube stop
    minikube delete

    Conclusion

    This guide provides a comprehensive setup for an MLOps lifecycle, covering model training, monitoring, alerting, and API deployment. While the implementation is limited by time and resource constraints, it offers a solid foundation for a production-ready MLOps environment. The architecture diagram, system workflow, and SLA dimensions provide a clear understanding of the system’s components and requirements. By following the setup guide, users can deploy and monitor the machine learning pipeline, track model accuracy, and receive alerts for critical metrics. The guide also highlights potential enhancements and alternative solutions to address limitations and improve the system’s reliability and performance.

    Visit original content creator repository
    https://github.com/syedshameersarwar/MLOpsLifeCycle

  • naivecache

    NaiveCache: a simple and easy-to-use Memcached Java client.

    Language grade: Java

    Requirements

    Maven Configuration

        <dependency>
            <groupId>com.heimuheimu</groupId>
            <artifactId>naivecache</artifactId>
            <version>1.1</version>
        </dependency>

    Log4J Configuration

    # NaiveCache root logger for error messages
    log4j.logger.com.heimuheimu.naivecache=WARN, NAIVECACHE
    log4j.additivity.com.heimuheimu.naivecache=false
    log4j.appender.NAIVECACHE=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.NAIVECACHE.file=${log.output.directory}/naivecache/naivecache.log
    log4j.appender.NAIVECACHE.encoding=UTF-8
    log4j.appender.NAIVECACHE.DatePattern=_yyyy-MM-dd
    log4j.appender.NAIVECACHE.layout=org.apache.log4j.PatternLayout
    log4j.appender.NAIVECACHE.layout.ConversionPattern=%d{ISO8601} %-5p [%F:%L] : %m%n
    
    # Memcached connection log
    log4j.logger.NAIVECACHE_MEMCACHED_CONNECTION_LOG=INFO, NAIVECACHE_MEMCACHED_CONNECTION_LOG
    log4j.additivity.NAIVECACHE_MEMCACHED_CONNECTION_LOG=false
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG.file=${log.output.directory}/naivecache/connection.log
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG.encoding=UTF-8
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG.DatePattern=_yyyy-MM-dd
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.NAIVECACHE_MEMCACHED_CONNECTION_LOG.layout.ConversionPattern=%d{ISO8601} %-5p : %m%n
    
    # Memcached error log, printing only the key and the error cause
    log4j.logger.NAIVECACHE_ERROR_LOG=INFO, NAIVECACHE_ERROR_LOG
    log4j.additivity.NAIVECACHE_ERROR_LOG=false
    log4j.appender.NAIVECACHE_ERROR_LOG=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.NAIVECACHE_ERROR_LOG.file=${log.output.directory}/naivecache/error.log
    log4j.appender.NAIVECACHE_ERROR_LOG.encoding=UTF-8
    log4j.appender.NAIVECACHE_ERROR_LOG.DatePattern=_yyyy-MM-dd
    log4j.appender.NAIVECACHE_ERROR_LOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.NAIVECACHE_ERROR_LOG.layout.ConversionPattern=%d{ISO8601} : %m%n
    
    # Memcached slow-execution log, printing operations that take longer than 50 ms
    log4j.logger.NAIVECACHE_SLOW_EXECUTION_LOG=INFO, NAIVECACHE_SLOW_EXECUTION_LOG
    log4j.additivity.NAIVECACHE_SLOW_EXECUTION_LOG=false
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG.file=${log.output.directory}/naivecache/slow_execution.log
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG.encoding=UTF-8
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG.DatePattern=_yyyy-MM-dd
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.NAIVECACHE_SLOW_EXECUTION_LOG.layout.ConversionPattern=%d{ISO8601} : %m%n
    

    Memcached Client

    Spring Configuration

        <!-- List of monitoring data collectors -->
        <util:list id="falconDataCollectorList">
            <!-- Memcached monitoring data collectors -->
            <bean class="com.heimuheimu.naivecache.memcached.monitor.falcon.CompressionDataCollector"></bean>
            <bean class="com.heimuheimu.naivecache.memcached.monitor.falcon.SocketDataCollector"></bean>
            <bean class="com.heimuheimu.naivecache.memcached.monitor.falcon.ThreadPoolDataCollector"></bean>
            <bean class="com.heimuheimu.naivecache.memcached.monitor.falcon.ExecutionDataCollector"></bean>
        </util:list>
        
        <!-- Falcon monitoring data reporter -->
        <bean id="falconReporter" class="com.heimuheimu.naivemonitor.falcon.FalconReporter" init-method="init" destroy-method="close">
            <constructor-arg index="0" value="http://127.0.0.1:1988/v1/push" /> <!-- Falcon push endpoint for monitoring data -->
            <constructor-arg index="1" ref="falconDataCollectorList" />
        </bean>

    Falcon Reported Metrics (reporting interval: 30 seconds)

    Memcached operation error metrics:

    • naivecache_key_not_found/module=naivecache      Number of Get operations that did not find the key within 30 seconds
    • naivecache_timeout/module=naivecache      Number of Memcached operations that timed out within 30 seconds
    • naivecache_error/module=naivecache      Number of Memcached operations that failed with an exception within 30 seconds
    • naivecache_slow_execution/module=naivecache      Number of slow Memcached executions within 30 seconds

    Memcached operation execution metrics:

    • naivecache_tps/module=naivecache      Average executions per second over the last 30 seconds
    • naivecache_peak_tps/module=naivecache      Peak executions per second over the last 30 seconds
    • naivecache_avg_exec_time/module=naivecache      Average execution time of a single Memcached operation over the last 30 seconds
    • naivecache_max_exec_time/module=naivecache      Maximum execution time of a single Memcached operation over the last 30 seconds

    Memcached socket metrics:

    • naivecache_socket_read_bytes/module=naivecache      Total bytes read from sockets within 30 seconds
    • naivecache_socket_avg_read_bytes/module=naivecache      Average bytes per socket read within 30 seconds
    • naivecache_socket_written_bytes/module=naivecache      Total bytes written to sockets within 30 seconds
    • naivecache_socket_avg_written_bytes/module=naivecache      Average bytes per socket write within 30 seconds

    Memcached thread pool metrics:

    • naivecache_threadPool_rejected_count/module=naivecache      Total number of tasks rejected by all thread pools within 30 seconds
    • naivecache_threadPool_active_count/module=naivecache      Approximate total number of active threads across all thread pools at collection time
    • naivecache_threadPool_pool_size/module=naivecache      Total number of threads across all thread pools at collection time
    • naivecache_threadPool_peak_pool_size/module=naivecache      Sum of the peak thread counts ever reached by all thread pools
    • naivecache_threadPool_core_pool_size/module=naivecache      Sum of the configured core pool sizes of all thread pools
    • naivecache_threadPool_maximum_pool_size/module=naivecache      Sum of the configured maximum pool sizes of all thread pools

    Memcached compression metrics:

    • naivecache_compression_reduce_bytes/module=naivecache      Bytes saved by compression operations within 30 seconds
    • naivecache_compression_avg_reduce_bytes/module=naivecache      Average bytes saved per compression operation within 30 seconds

    Memcached Example Code

        public class DemoService {
            
            @Resource(name = "memcachedClusterClient")
            private NaiveMemcachedClient naiveMemcachedClient;
            
            public void test() {
                User alice = new User(); // User instance to be stored in Memcached; it must be serializable (implement the Serializable interface)
                
                naiveMemcachedClient.set("demo_user_alice", alice, 30); // store the alice instance in Memcached with an expiry of 30 seconds
                
                User aliceFromCache = naiveMemcachedClient.get("demo_user_alice"); // fetch the alice instance back from Memcached
                
                User lucy = new User(); // User instance to be stored in Memcached; it must be serializable (implement the Serializable interface)
                
                naiveMemcachedClient.set("demo_user_lucy", lucy, 60); // store the lucy instance in Memcached with an expiry of 60 seconds
                
                naiveMemcachedClient.touch("demo_user_alice", 60); // reset the expiry of the cached alice instance to 60 seconds
                
                // use multiGet for batch Get operations, which can significantly improve performance
                Set<String> keySet = new HashSet<>(); // put the keys to be fetched in batch into a Set
                keySet.add("demo_user_alice");
                keySet.add("demo_user_lucy");
                Map<String, User> userMap = naiveMemcachedClient.multiGet(keySet); // execute the Memcached batch Get operation
                User aliceFromCacheMap = userMap.get("demo_user_alice"); // get the alice instance from the result map
                User lucyFromCacheMap = userMap.get("demo_user_lucy"); // get the lucy instance from the result map
                
                naiveMemcachedClient.delete("demo_user_alice"); // delete the alice instance from Memcached
                naiveMemcachedClient.delete("demo_user_lucy"); // delete the lucy instance from Memcached
            }
        }

    More Memcached Clients

    For low-frequency access against a single Memcached server, the one-time Memcached client OneTimeMemcachedClient can be used:

        <!-- Auto-reconnect Memcached client configuration -->
        <bean id="autoReconnectMemcachedClient" class="com.heimuheimu.naivecache.memcached.advance.AutoReconnectMemcachedClient" destroy-method="close">
            <constructor-arg index="0" value="127.0.0.1:11211" /> <!-- Memcached server address -->
        </bean>

    ReloadableMemcachedClusterClient is a Memcached cluster client that supports hot reloading (dynamically adding/removing Memcached server addresses). Note: ReloadableMemcachedClusterClient has not yet been verified in a production environment.

    Local Cache Client

    Spring Configuration

        <!-- List of monitoring data collectors -->
        <util:list id="falconDataCollectorList">
            <!-- Local cache monitoring data collector -->
            <bean class="com.heimuheimu.naivecache.localcache.monitor.falcon.LocalCacheDataCollector"></bean>
        </util:list>
        
        <!-- Falcon monitoring data reporter -->
        <bean id="falconReporter" class="com.heimuheimu.naivemonitor.falcon.FalconReporter" init-method="init" destroy-method="close">
            <constructor-arg index="0" value="http://127.0.0.1:1988/v1/push" /> <!-- Falcon push endpoint for monitoring data -->
            <constructor-arg index="1" ref="falconDataCollectorList" />
        </bean>

    Falcon Reported Metrics (reporting interval: 30 seconds)

    • naivecache_local_error/module=naivecache      Total number of local cache operations that failed with an exception within 30 seconds
    • naivecache_local_query/module=naivecache      Total number of local cache get operations within 30 seconds
    • naivecache_local_query_hit/module=naivecache      Total number of local cache get hits within 30 seconds
    • naivecache_local_added/module=naivecache      Total number of keys added to the local cache within 30 seconds
    • naivecache_local_deleted/module=naivecache      Total number of keys deleted from the local cache within 30 seconds
    • naivecache_local_size/module=naivecache      Current total number of keys in the local cache

    Local Cache Example Code

        public class DemoService {
            
            @Autowired
            private NaiveLocalCacheClient naiveLocalCacheClient;
            
            public void test() {
                User alice = new User(); // User instance to be stored in the local cache; if serialization mode is enabled, User must be serializable (implement the Serializable interface)
                            
                naiveLocalCacheClient.set("demo_user_alice", alice, 30); // store the alice instance in the local cache with an expiry of 30 seconds
                
                User aliceFromCache = naiveLocalCacheClient.get("demo_user_alice"); // fetch the alice instance from the local cache; if serialization mode is not enabled, the returned instance must not be modified
                
                naiveLocalCacheClient.delete("demo_user_alice"); // delete the alice instance from the local cache
            }
        }

    Release History

    V1.1

    New features:

    • The NaiveMemcachedClient#get(String) method can now directly read values set via NaiveMemcachedClient#addAndGet(String, long, long, int).

    V1.0

    Features:

    • Simple configuration
    • Support for commonly used Memcached commands
    • Quick Memcached metrics monitoring via Falcon
    • Real-time alerting for Memcached failures via DingTalk
    • A simple and efficient built-in local cache implementation

    More Information

    Visit original content creator repository https://github.com/heimuheimu/naivecache
  • Trust-QR-Backend

    Trust QR Backend

    Trust QR is a platform that uses Blockchain technology and QR codes to combat counterfeiting. Companies can register their products on the platform, and each product will be assigned a unique QR code. Consumers can scan the QR code on a product to validate product authenticity, ensuring it matches the stated brand and providing manufacturing and expiry date details.

    About

    This project is based on blockchain technology.

    Tech Stack

    We have used the following technologies:

    • Frontend : Next.js, CSS
    • Backend : FastAPI
    • Blockchain : Ganache & Brownie
    • Testing : Pytest

    Cloning the project

    To clone the project, open a terminal and navigate to the directory where you want to clone the project. Then, run the following command :

    git clone https://github.com/Trust-QR/Trust-QR-Backend.git

    Installing Brownie

    First, install pipx:

    python3 -m pip install --user pipx
    python3 -m pipx ensurepath

    To install Brownie using pipx :

    pipx install eth-brownie

    OR

    To install via pip :

    pip install eth-brownie

    For more info: Brownie

    Installing FastAPI

    To install FastAPI, open a terminal and run the following command :

    pip install fastapi

    For more info: FastAPI

    Once you have installed Brownie and FastAPI, you can start the backend by running the following command :

    brownie run scripts/server.py
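
    Once the server is up, FastAPI automatically serves interactive API documentation at /docs. A quick smoke test from Python (the address below assumes the default FastAPI/uvicorn host and port; adjust it to whatever scripts/server.py prints on startup):

    import requests

    # Assumed default address; use the one printed by the server script
    resp = requests.get("http://127.0.0.1:8000/docs")
    print(resp.status_code)  # 200 means the API and its interactive docs are reachable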

    Run your Frontend :

    npm run dev

    Open http://localhost:3000 with your browser to see the result.

    Visit original content creator repository
    https://github.com/Trust-QR/Trust-QR-Backend