Category: Blog

  • khook

    KHOOK (خوک) – Linux Kernel hooking engine.

    Usage

    Include Makefile.khook in your Makefile/Kbuild file:

    MODNAME      ?= your-module-name
    ...
    include      /path/to/khook/Makefile.khook
    ...
    $(MODNAME)-y += $(KHOOK_GOALS)
    ccflags-y    += $(KHOOK_CCFLAGS)
    ldflags-y    += $(KHOOK_LDFLAGS) # use LDFLAGS for old kernels
    ...
    

    Then, include the KHOOK engine header as follows:

    #include <khook/engine.h>
    

    Use khook_init(lookup) and khook_cleanup() to initialize and de-initialize the hooking engine.

    Use khook_lookup_name(sym) to resolve the address of sym.

    Use khook_write_kernel(fn, arg) to write to kernel read-only data.
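
    Putting these calls together, a minimal module skeleton might look like the following. This is a sketch rather than code from the repository: it assumes that passing NULL as the lookup argument makes the engine fall back to its internal symbol resolver, and that khook_init() installs every hook declared with KHOOK()/KHOOK_EXT() in the module.

    #include <linux/module.h>
    #include <khook/engine.h>

    /* KHOOK(...) / KHOOK_EXT(...) hook definitions go here */

    static int __init my_init(void)
    {
            return khook_init(NULL); /* NULL lookup: assumed to select the built-in resolver */
    }

    static void __exit my_exit(void)
    {
            khook_cleanup(); /* uninstall all hooks installed by khook_init() */
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");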

    Examples

    See the khook_demo folder for examples. Use make to build it.

    Hooking of generic kernel functions

    An example of hooking a kernel function with a known prototype (the function is declared in linux/fs.h):

    #include <linux/fs.h> // has inode_permission() proto
    KHOOK(inode_permission);
    static int khook_inode_permission(struct inode *inode, int mask)
    {
            int ret = 0;
            ret = KHOOK_ORIGIN(inode_permission, inode, mask);
            printk("%s(%p, %08x) = %d\n", __func__, inode, mask, ret);
            return ret;
    }
    

    An example of hooking a kernel function with a custom prototype (the function is not declared in linux/binfmts.h):

    #include <linux/binfmts.h> // has no load_elf_binary() proto
    KHOOK_EXT(int, load_elf_binary, struct linux_binprm *);
    static int khook_load_elf_binary(struct linux_binprm *bprm)
    {
            int ret = 0;
            ret = KHOOK_ORIGIN(load_elf_binary, bprm);
            printk("%s(%p) = %d (%s)\n", __func__, bprm, ret, bprm->filename);
            return ret;
    }
    

    Starting from a6e7f394 it’s possible to hook functions with a large number of arguments. This requires KHOOK to make a local copy of the N (hardcoded as 8) arguments that are passed on the stack before calling the handler function.

    An example of hooking the 12-argument function scsi_execute is shown below (see #5 for details):

    
    #include <scsi/scsi_device.h>
    KHOOK(scsi_execute);
    static int khook_scsi_execute(struct scsi_device *sdev, const unsigned char *cmd, int data_direction, void *buffer, unsigned bufflen, unsigned char *sense, struct scsi_sense_hdr *sshdr, int timeout, int retries, u64 flags, req_flags_t rq_flags, int *resid)
    {
            int ret = 0;
            ret = KHOOK_ORIGIN(scsi_execute, sdev, cmd, data_direction, buffer, bufflen, sense, sshdr, timeout, retries, flags, rq_flags, resid);
            printk("%s(%lx, %lx, %lx, %lx, %lx, %lx, %lx, %lx, %lx, %lx, %lx, %lx) = %d\n", __func__, (long)sdev, (long)cmd, (long)data_direction, (long)buffer, (long)bufflen, (long)sense, (long)sshdr, (long)timeout, (long)retries, (long)flags, (long)rq_flags, (long)resid ,ret);
            return ret;
    }
    
    

    Starting from f996ce39 it’s possible to hook x86-32 kernels, as a correct trampoline has been implemented.

    Hooking of system calls (handler functions)

    An example of hooking the kill(2) system call handler (see #3 for details):

    // long sys_kill(pid_t pid, int sig)
    KHOOK_EXT(long, sys_kill, long, long);
    static long khook_sys_kill(long pid, long sig) {
            printk("sys_kill -- %s pid %ld sig %ld\n", current->comm, pid, sig);
            return KHOOK_ORIGIN(sys_kill, pid, sig);
    }
    
    // long sys_kill(const struct pt_regs *regs) -- modern kernels
    KHOOK_EXT(long, __x64_sys_kill, const struct pt_regs *);
    static long khook___x64_sys_kill(const struct pt_regs *regs) {
            printk("sys_kill -- %s pid %ld sig %ld\n", current->comm, regs->di, regs->si);
            return KHOOK_ORIGIN(__x64_sys_kill, regs);
    }
    

    Features

    • x86 only
    • 2.6.33+ kernels
    • use of in-kernel length disassembler
    • ready-to-use submodule with no external deps

    How it works

    The diagram below illustrates the call to function X without hooking:

    CALLER
    | ...
    | CALL X -(1)---> X
    | ...  <----.     | ...
    ` RET       |     ` RET -.
                `--------(2)-'
    

    The diagram below illustrates the call to function X when KHOOK is used:

    CALLER
    | ...
    | CALL X -(1)---> X
    | ...  <----.     | JUMP -(2)----> khook_X_stub
    ` RET       |     | ???            | INCR use_count
                |     | ...  <----.    | CALL handler   -(3)----> khook_X
                |     | ...       |    | DECR use_count <----.    | ...
                |     ` RET -.    |    ` RET -.              |    | CALL origin -(4)----> khook_X_orig
                |            |    |           |              |    | ...  <----.           | N bytes of X
                |            |    |           |              |    ` RET -.    |           ` JMP X + N -.
                `------------|----|-------(8)-'              '-------(7)-'    |                        |
                             |    `-------------------------------------------|--------------------(5)-'
                             `-(6)--------------------------------------------'
    

    License

    This software is licensed under the GPL.

    Author

    Ilya V. Matveychikov

    2018, 2019, 2020, 2022, 2023, 2024

    Visit original content creator repository
    https://github.com/milabs/khook

  • frontend-integration-devtalk

    Demo application

    The application uses web components (Custom Elements + HTML Imports) to compose pages out of the main (domain) content and different distributed fragments (header, footer) on the client (browser).
    The different domain pages are connected through simple hyperlinks.

    NOTE: The application is functional in all major browsers (via polyfills). Nevertheless, there is a short flickering of the transcluded fragments (header, footer) after a GET request in browsers other than Chrome (which has a native web components implementation).
    If this flickering cannot be worked around, it should be discussed whether web components are the right approach for transclusion (at least for the common fragments that are visible across domain pages in the same place).



    Structure

    • ui-proxy acts as a reverse proxy and routes requests to the correct service depending on the context path (using container linking).
    • layout-service delivers common fragments (header, footer), global styles and custom webcomponents.
    • home-ui delivers the ‘home’ domain page (static HTML).
    • mail-ui delivers the ‘mail’ domain page (ember.js).
    • presentation contains the dev-talk presentation (Slides).



    (Architecture diagram)

    Run the demo application

    Requirements:
    Local docker(-compose) installation

    docker-compose up

    The application is then accessible at http://localhost.

    Run the presentation

    Requirements:
    Local npm installation

    cd presentation/
    npm install
    npm start

    The presentation is then accessible at http://localhost:8000.
    NOTE: You can view the presentation notes by pressing s in the browser.

    Resources

    Microservice/SCS UI composition

    Tech

    Visit original content creator repository https://github.com/lgraf/frontend-integration-devtalk
  • github-commit-watcher

    Official documentation here.

    gicowa.py – GitHub Commit Watcher

    GitHub’s Watch feature doesn’t send notifications when commits are pushed. This script aims to implement this feature and much more.

    Call for maintainers: I don’t use this project myself anymore but IFTTT instead (see below). If you’re interested in taking over the maintenance of this project, or just helping, please let me know (e.g. by opening an issue).

    Installation

    $ sudo apt-get install sendmail
    $ sudo pip install gicowa
    

    Quick setup

    Add the following line to your /etc/crontab:

    0 * * * * root gicowa --persist --no-color --mailto myself@mydomain.com lastwatchedcommits MyGitHubUsername sincelast > /tmp/gicowa 2>&1
    

    That’s it. As long as your machine is running you’ll get e-mails when something gets pushed on a repo you’re watching.

    NOTES:

    • The e-mails are likely to be considered spam until you mark one as non-spam in your e-mail client. Or use the --mailfrom option.
    • If you’re watching 15 repos or more, you probably want to use the --credentials option to make sure you don’t hit the GitHub API rate limit.

    Other/Advanced usage

    gicowa is a generic command-line tool with which you can do much more than just implement the use case depicted in the introduction. This section shows what it can do.

    List repos watched by a user

    $ gicowa watchlist AurelienLourot
    watchlist AurelienLourot
    brandon-rhodes/uncommitted
    AurelienLourot/crouton-emacs-conf
    brillout/FasterWeb
    AurelienLourot/github-commit-watcher
    

    List last commits on a repo

    $ gicowa lastrepocommits AurelienLourot/github-commit-watcher since 2015 07 05 09 12 00
    lastrepocommits AurelienLourot/github-commit-watcher since 2015-07-05 09:12:00
    Last commit pushed on 2015-07-05 10:48:58
    Committed on 2015-07-05 10:46:27 - Aurelien Lourot - Minor cleanup.
    Committed on 2015-07-05 09:39:01 - Aurelien Lourot - watchlist command implemented.
    Committed on 2015-07-05 09:12:00 - Aurelien Lourot - argparse added.
    

    NOTES:

    • Keep in mind that a commit’s committer timestamp isn’t the time at which it gets pushed.
    • The lines starting with Committed on list commits on the master branch only. Their timestamps are the committer timestamps.
    • The line starting with Last commit pushed on shows the time at which a commit got pushed on the repository for the last time on any branch.

    List last commits on repos watched by a user

    $ gicowa lastwatchedcommits AurelienLourot since 2015 07 04 00 00 00
    lastwatchedcommits AurelienLourot since 2015-07-04 00:00:00
    AurelienLourot/crouton-emacs-conf - Last commit pushed on 2015-07-04 17:10:18
    AurelienLourot/crouton-emacs-conf - Committed on 2015-07-04 17:08:48 - Aurelien Lourot - Support for Del key.
    brillout/FasterWeb - Last commit pushed on 2015-07-04 16:40:54
    brillout/FasterWeb - Committed on 2015-07-04 16:38:55 - brillout - add README
    AurelienLourot/github-commit-watcher - Last commit pushed on 2015-07-05 10:48:58
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 10:46:27 - Aurelien Lourot - Minor cleanup.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:39:01 - Aurelien Lourot - watchlist command implemented.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:12:00 - Aurelien Lourot - argparse added.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:07:14 - AurelienLourot - Initial commit
    

    NOTE: if you’re watching 15 repos or more, you probably want to use the --credentials option to make sure you don’t hit the GitHub API rate limit.

    List last commits since last run

    Any listing command that takes a since <timestamp> argument also takes a sincelast one. It will then use the time at which that same command was last run on this machine with the --persist option. This option makes gicowa remember the last execution time of each command in ~/.gicowa.

    $ gicowa --persist lastwatchedcommits AurelienLourot sincelast
    lastwatchedcommits AurelienLourot since 2015-07-05 20:17:46
    $ gicowa --persist lastwatchedcommits AurelienLourot sincelast
    lastwatchedcommits AurelienLourot since 2015-07-05 20:25:33
    

    Send output by e-mail

    You can send the output of any command to yourself by e-mail:

    $ gicowa --no-color --mailto myself@mydomain.com lastwatchedcommits AurelienLourot since 2015 07 04 00 00 00
    lastwatchedcommits AurelienLourot since 2015-07-04 00:00:00
    AurelienLourot/crouton-emacs-conf - Last commit pushed on 2015-07-04 17:10:18
    AurelienLourot/crouton-emacs-conf - Committed on 2015-07-04 17:08:48 - Aurelien Lourot - Support for Del key.
    brillout/FasterWeb - Last commit pushed on 2015-07-04 16:40:54
    brillout/FasterWeb - Committed on 2015-07-04 16:38:55 - brillout - add README
    AurelienLourot/github-commit-watcher - Last commit pushed on 2015-07-05 10:48:58
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 10:46:27 - Aurelien Lourot - Minor cleanup.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:39:01 - Aurelien Lourot - watchlist command implemented.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:12:00 - Aurelien Lourot - argparse added.
    AurelienLourot/github-commit-watcher - Committed on 2015-07-05 09:07:14 - AurelienLourot - Initial commit
    Sent by e-mail to myself@mydomain.com
    

    NOTES:

    • You probably want to use --no-color because your e-mail client is likely not to render the bash color escape sequences properly.
    • The e-mails are likely to be considered spam until you mark one as non-spam in your e-mail client. Or use the --mailfrom option.

    Changelog

    1.2.3 (2015-10-17) to 1.2.5 (2015-10-19):

    • Exception on non-ASCII characters fixed.

    1.2.2 (2015-10-12):

    • Machine name appended to e-mail content.

    1.2.1 (2015-08-20):

    • Documentation improved.

    1.2.0 (2015-08-20):

    • --version option implemented.

    1.1.0 (2015-08-20):

    • --errorto option implemented.

    1.0.1 (2015-08-18) to 1.0.9 (2015-08-19):

    • Documentation improved.

    Contributors

    Similar projects

    The following projects provide similar functionalities:

    • IFTTT, see this post.
    • Zapier, however you have to create a “Zap” for each single project you want to watch. See this thread.
    • HubNotify, however you will be notified only for new tags, not new commits.
    Visit original content creator repository https://github.com/lourot/github-commit-watcher
  • nvim-local-fennel

    nvim-local-fennel

    This has been superseded! I now recommend installing Olical/nfnl and enabling the exrc option in order to have directory local Neovim configuration in Fennel.

    Once you have nfnl and a .nfnl.fnl file at the root of your project you can write to .nvim.fnl and have .nvim.lua compiled for you automatically. This file is loaded by native Neovim with zero plugins provided you have :set exrc enabled.

    This means even colleagues that don’t have nfnl installed can use your directory local configuration. Consider this repo as essentially archived and superseded by this much smoother approach.

    Run Fennel inside Neovim on startup with Aniseed.

    Add some Fennel code, such as (print "Hello, World!"), to a file named .lnvim.fnl in your current directory or anywhere above it, such as your home directory. A file called .lnvim.lua will be created beside the .fnl and executed upon startup. Files higher up in the directory hierarchy, such as the home directory, are executed before those found lower down, such as in a project.

    Be sure to git-ignore .lnvim.fnl and .lnvim.lua if you don’t want to share your local configuration with others. If you do want to share a .lnvim.fnl, I’d recommend ignoring the .lua file to prevent duplicated changes in git commits.

    Aniseed will only re-compile the Fennel code if it has changed since the last time you opened Neovim. If you delete the .lnvim.fnl file, the .lnvim.lua file will be deleted automatically the next time you launch Neovim, to ensure you don’t accidentally leave Lua files lying around.

    Installation

    If you want interactive evaluation of the forms in your .lnvim.fnl file you can install Conjure too.

    Using packer.nvim:

    use 'Olical/nvim-local-fennel'
    use 'Olical/aniseed'

    Using vim-plug:

    Plug 'Olical/nvim-local-fennel'
    Plug 'Olical/aniseed'

    Access to Aniseed

    Aniseed is embedded under the nvim-local-fennel.aniseed.* module prefix, which means you can use Aniseed’s macros and functions in your .lnvim.fnl files!

    ;; .lnvim.fnl
    ;; You can give the module any name you want.
    (module my-local-fennel
      {autoload {a nvim-local-fennel.aniseed.core
                 str nvim-local-fennel.aniseed.string
                 nvim nvim-local-fennel.aniseed.nvim}})
    
    ;; A hyphen suffix denotes a private function.
    (defn- do-some-things [numbers]
      (a.println
        (nvim.fn.getcwd)
        (a.map a.inc numbers)
        {:Hello :Fennel!}))
    
    ;; Public value.
    ;; You could require this module and access it.
    (def counting [1 2 3])
    
    ;; Executed as the file is loaded.
    (do-some-things counting)

    Unlicense

    Find the full unlicense in the UNLICENSE file, but here’s a snippet.

    This is free and unencumbered software released into the public domain.

    Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

    Visit original content creator repository
    https://github.com/Olical/nvim-local-fennel

  • kinesis-flink-hudi-benchmark

    Kinesis Flink App Hudi Benchmark

    Contributors

    This repository has been developed primarily by @ajaen4, @adrij and @alfonjerezi.

    Introduction

    This project deploys an architecture in AWS that ingests and processes streaming data with a Kinesis Flink application and writes the output to S3 in Hudi and JSON formats.

    Architecture

    (Architecture diagram)

    Documentation

    Articles:

    Requirements

    • You must own an AWS account and have an Access Key to be able to authenticate. You need this so every script or deployment runs with the correct credentials. See here for steps to configure your credentials.

    • Versions:

      • Terraform = 1.1.7
      • terraform-docs = 0.16.0
      • hashicorp/aws = 4.54.0
      • Python = 3.8

    Infrastructure deployed

    This code will deploy the following infrastructure inside AWS:

    • 3 Kinesis Flink Applications
    • 1 Kinesis Data Stream
    • 4 S3 buckets
      • Deployment bucket
      • JSON data bucket
      • Hudi COW data bucket
      • Hudi MOR data bucket
    • 1 EKS Cluster
    • 1 Locust app deployed in the EKS Cluster
    • 3 Monitoring Lambdas (1 per output type)

    Installation

    Follow the instructions here to install terraform

    Follow the instructions here to install the AWS CLI

    Follow the instructions here to install Python 3.8

    In order to run any Python code locally you will need to create a virtual env first and install all requirements:

    python3 -m venv .venv
    source .venv/bin/activate

    On Windows:

    python3 -m venv .venv
    .venv\Scripts\activate

    And to install all required packages:

    make install

    S3 bucket and DynamoDB for Terraform state deployment

    This small infra deployment makes it possible to use remote state with Terraform. See more info about remote state here. Commands:

    cd infra/bootstraper-terraform
    terraform init
    terraform <plan/apply/destroy> -var-file=vars/bootstraper.tfvars
    
    # Example
    cd infra/bootstraper-terraform
    terraform init
    terraform apply -var-file=vars/bootstraper.tfvars

    It is important to choose the variables declared in the “bootstraper-terraform/vars/bootstraper.tfvars” file wisely because the bucket name is formed from them.

    An output will be printed on the terminal’s screen; this is an example:

    state_bucket_name = "eu-west-1-bluetab-cm-vpc-tfstate"

    Please copy it; we will be using it in the next section.

    Infrastructure deployment

    Instance types

    Important: we have set some big and expensive instances in the vars/flink-hudi.tfvars variables file. We recommend you set these variables appropriately to avoid incurring excessive costs.

    Commands

    To be able to deploy the infrastructure, it’s necessary to fill in the variables file (“vars/flink-hudi.tfvars”) and the backend config for the remote state (“terraform.tf”).

    To deploy, the following commands must be run:

    terraform <plan/apply/destroy> -var-file=vars/flink-hudi.tfvars

    We will use the value copied in the previous section, the state bucket name, to substitute the <OUTPUT_FROM_BOOTSTRAPER_TERRAFORM> value in the infra/backend.tf file. You will need Docker and the Docker daemon running in order to perform the deployment.

    Sending events with Locust

    Locally

    Once deployed, you can make use of the provided Locust application to send events to the Kinesis Stream. Just make sure that the environment variables are properly configured in event_generation/.env (add AWS_PROFILE if you want to use a different one from the default) and run:

    make send-records

    A Locust process will start and you can access its UI at http://0.0.0.0:8089/. You can modify the number of users and the spawn rate, but the defaults will suffice for testing the application.

    From the Locust EKS app

    After deploying all the infrastructure you will see an output called load_balancer_dns. Its value is a URL; copy and paste it into your web browser to see the Locust interface. Choose the configuration for your load test and click “Start swarming”. You will immediately start sending events to the designated Kinesis stream!

    Application details

    Some dependencies are needed for the Flink application to work properly, and they merit some explanation:

    • flink-sql-connector-kinesis – Fundamental connector for our Flink application to be able to read from a Kinesis Stream.
    • flink-s3-fs-hadoop – Allows the application to operate on top of S3.
    • hudi-flink1.15-bundle – Package provided by Hudi developers, with all the necessary dependencies to work with the technology.
    • hadoop-mapreduce-client-core – Additional dependency required for writing to Hudi to work correctly in KDA. It is possible that in future versions of the Hudi Bundle this dependency will not be needed.
    • aws-java-sdk-glue, hive-common, hive-exec – Necessary dependencies for the integration between Hudi and AWS Glue Catalog

    License

    MIT License – Copyright (c) 2023 The kinesis-flink-hudi-benchmark Authors.

    Visit original content creator repository https://github.com/ajaen4/kinesis-flink-hudi-benchmark
  • ghqc-en

    Visit original content creator repository
    https://github.com/Demomaker/ghqc-en

  • Honey-Bee-Analysis

    Visit original content creator repository
    https://github.com/MattMiniat/Honey-Bee-Analysis

  • frost

    FROST is an open source, cold storage wallet for IOTA. It runs offline, can be compiled from source, and requires minimal dependencies.

    Launch Article: Read it on Medium.

    OFFLINE USE ONLY: FROST REQUIRES YOU TO DISCONNECT FROM THE INTERNET!

    Give Frost A Try!

    Short on time? Not storing a lot? Run the cold storage wallet directly in your browser. http://frostwallet.info/

    Run Latest Build

    The frost repository includes the latest build. You can run it conveniently without having to build it from source.

    1. Clone the repo from GitHub.
    git clone https://github.com/zachalam/frost.git
    
    2. Open frost/build/index.html in your default browser.
    open frost/build/index.html
    

    Build From Source

    If you’re super paranoid (completely justified). You can fully examine AND build frost from source. Here’s a quick guide that’ll get you up and running locally.

    You will need to have the latest version of Node/NPM installed.

    1. Clone the repo from GitHub.
    git clone https://github.com/zachalam/frost.git
    
    2. Enter the frost directory and build from source.
    cd frost
    npm run build
    
    3. Open build/index.html in your default browser.
    open build/index.html
    

    That’s all! Here’s it all together:

    git clone https://github.com/zachalam/frost.git
    cd frost
    npm run build
    open build/index.html
    

    Keep Coins Secure

    • ALWAYS disconnect from the Internet when generating/accessing a key.
    • DO NOT share your encrypted wallet OR seed with anyone.
    • DO NOT SEND FUNDS FROM YOUR ADDRESS MORE THAN ONCE!!!

    Contributing

    Pull requests are more than welcome.

    License

    This project is licensed under the MIT License – see the LICENSE.md file for details

    Visit original content creator repository https://github.com/zachalam/frost
  • hm_caldav

    CalDav integration for HomeMatic – hm_caldav

    This CCU addon reads an ics file from the given URL. In the configuration you can define which meetings are represented as system variables within the HomeMatic CCU environment. If a defined meeting is currently running, this is reflected by the value of the corresponding system variable. Additionally, there are variables -TODAY and -TOMORROW which are set to active if a meeting is planned for today or tomorrow, even if the meeting only lasts for e.g. an hour.

    Important: This addon is based on wget. On your CCU there might be an outdated version of wget, which might not support TLS 1.1 or TLS 1.2.

    Supported CCU models

    Installation as CCU Addon

    1. Download of recent Addon-Release from Github
    2. Installation of Addon archive (hm_caldav-X.X.tar.gz) via WebUI interface of CCU device
    3. Configuration of Addon using the WebUI accessible config pages

    Manual Installation as stand-alone script (e.g. on RaspberryPi)

    1. Create a new directory for hm_caldav:

       mkdir /opt/hm_caldav
      
    2. Change to new directory:

       cd /opt/hm_caldav
      
    3. Download latest hm_caldav.sh:

       wget https://github.com/H2CK/hm_caldav/raw/master/hm_caldav.sh
      
    4. Download of sample config:

       wget https://github.com/H2CK/hm_caldav/raw/master/hm_caldav.conf.sample
      
    5. Rename sample config to active one:

       mv hm_caldav.conf.sample hm_caldav.conf
      
    6. Modify configuration according to comments in config file:

       vim hm_caldav.conf
      
    7. Execute hm_caldav manually:

       /opt/hm_caldav/hm_caldav.sh
      
    8. If you want to automatically start hm_caldav on system startup, a startup script is required.

    Using ‘system.Exec()’

    Instead of automatically calling hm_caldav on a predefined interval, one can also trigger its execution from HomeMatic scripts on the CCU using the system.Exec() command with the following syntax:

        system.Exec("/usr/local/addons/hm_caldav/run.sh <iterations> <waittime> &");
    

    Please note the <iterations> and <waittime> arguments, which allow you to additionally specify how many times hm_caldav should be executed and how much wait time to leave between executions. One example of such an execution is:

        system.Exec("/usr/local/addons/hm_caldav/run.sh 5 2 &");
    

    This will execute hm_caldav a total of 5 times with a wait time of 2 seconds between each execution.

    Support

    In case of problems/bugs or if you have any feature requests please feel free to open a new ticket at the Github project pages.

    License

    The use and development of this addon is based on version 3 of the LGPL open source license.

    Authors

    Copyright (c) 2018-2021 Thorsten Jagel <dev@jagel.net>

    Notice

    This Addon uses KnowHow that was developed throughout the following projects:

    Visit original content creator repository https://github.com/H2CK/hm_caldav
  • nuphy-linux

    nuphy-linux

    Fixes the browser (VIA) not being able to connect to NuPhy keyboards

    Nuphy udev rules

    Welcome to the GitHub repository for the NuPhy udev rules! This repository contains the essential udev rules needed to ensure compatibility and proper permissions for NuPhy hardware. These rules are designed to work seamlessly with the VIA web application at usevia.app.

    Overview

    Udev is a device manager for the Linux kernel which dynamically creates or removes device nodes in the /dev directory. For NuPhy keyboards, specific udev rules are required to set the correct permissions, allowing applications like usevia.app to interact with them without needing root privileges.

    This repository provides the necessary udev rules to facilitate this interaction, ensuring a smooth and secure experience for users of NuPhy keyboards on Linux systems.

    I cannot guarantee or verify that all devices will work because I do not own all of them (why should I?!). But using the JSON files from the official site I do have the IDs they used. So it should work, right?
    In the worst case open an issue and let me know.

    Supported devices

    Tested:

    • Nuphy Air96 v2
    • Nuphy Air60 v2
    • Nuphy Kick75

    Untested (should work, I don’t have one though):

    • Nuphy Air75 v2
    • NuPhy Gem80
    • NuPhy Halo75
    • NuPhy Halo96
    • NuPhy Nos75

    Tested by contributors:

    • NuPhy Halo65 HE (IcarusSosie)

    Installation

    Prerequisites

    • A Linux-based operating system.
    • A supported Nuphy device.

    Steps

    1. Clone the Repository:

      git clone https://github.com/Z3R0-CDS/nuphy-linux
    2. Navigate to the Repository:

      cd nuphy-linux
    3. Install the Udev Rule:

      Manual

      sudo cp nuphy.rules /etc/udev/rules.d/
      sudo udevadm control --reload-rules && sudo udevadm trigger

      Automated

      ./install_rules.sh
    4. Verify Installation:
      Connect your NuPhy device and verify that it’s detected correctly by the VIA application.
      Make sure to follow the guide on the official website to ensure it’s working as intended.
      Also keep in mind that it might be necessary to reopen the browser or try a private tab if you attempted to use VIA before.
      I had some cached permission issues at first, and testing in a private tab helped.

    Usage

    Once installed, the udev rules will automatically set the correct permissions for your NuPhy device.
    This allows the VIA web application to detect and interact with your device without requiring additional configuration.


    For more information or support, open an issue in this repository. I will try to respond ASAP.

    Create a rule yourself.

    Commands

    1. Get a list of usb devices.
      lsusb
    2. Get data of your device.

      Bus 005 Device 007: ID 19f5:3265 NuPhy NuPhy Air96 V2 <- Example output for my nuphy
                                 ^ 
                               These are the vendor ID and device ID
      The vendor ID will be 19f5 because NuPhy is NuPhy.
      
    3. Create a rule.

       <notepad app(kate)> nuphy-<something>.rules
       Enter the rules:
       SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTR{idVendor}=="<vendorID>", ATTR{idProduct}=="<deviceID>", MODE="0666"
       
       KERNEL=="hidraw*", ATTRS{idVendor}=="<vendorID>", ATTRS{idProduct}=="<deviceID>", MODE="0666"
      
    4. Apply the rule.
      Then just copy and apply it as shown above.
      Also, sharing is caring, so open a merge request.

    Visit original content creator repository
    https://github.com/Z3R0-CDS/nuphy-linux