Blog

  • PiHole-Dark

    PiHole-Dark

    A Dark theme for the PiHole admin page. (Now FTL compatible)

    Example dashboard


    Easy install

    1. Open a terminal (or SSH if you’re remoting in) (Ctrl + Alt + T)
    2. Type: cd /var/www/html/
    3. Type: sudo wget https://raw.githubusercontent.com/lkd70/PiHole-Dark/master/install.sh
    4. Type: sudo chmod +x install.sh
    5. Type: sudo ./install.sh
    6. Follow the onscreen instructions!
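
    If you prefer to paste everything at once, the steps above boil down to this snippet (assuming the default Pi-hole web root of /var/www/html):

    cd /var/www/html/
    sudo wget https://raw.githubusercontent.com/lkd70/PiHole-Dark/master/install.sh
    sudo chmod +x install.sh
    sudo ./install.sh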

    Easy Uninstall

    Not a fan of the style? Sorry to hear that. Please feel free to leave some feedback on the Official Thread. Sadly this theme isn’t for everyone, so here’s a simple method to restore the original beauty of PiHole:

    1. Open a terminal (or SSH if you’re remoting in) (Ctrl + Alt + T)
    2. Type: cd /var/www/html/admin/
    3. Type: git reset --hard
    4. Go leave some feedback so we know what you didn’t like!
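
    The same restore, as a single snippet (prepend sudo to the git command if you hit a permissions error):

    cd /var/www/html/admin/
    git reset --hard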

    Suggestion?

    Something you’d like to see? Or maybe there’s an incompatibility with a newer version of pi-hole? Please open an issue or make a pull request!


    FAQ

    Q: Yuck! Why is the background still white??!? How could you do this to me?

    First of all, I’d like to say sorry for your eyes; I’m sure they hurt the way mine did upon first seeing that sight.
    Let’s look at two possible solutions to this.

    1. Create your own pretty background image! (or grab one from a background tile/pattern website)

    You can easily replace the ugly image in /var/www/html/admin/img/boxed-bg.jpg with a pretty dark one.
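
    For example, assuming you have a dark image saved as dark-bg.jpg in your home directory (the source filename here is just a placeholder), a single copy should do it:

    sudo cp ~/dark-bg.jpg /var/www/html/admin/img/boxed-bg.jpg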

    2. To hell with the box! View the website without the 4:3 boxed view.

    You can change to a wide view from the admin website by navigating to Settings -> API / Web Interface -> Web interface settings and unchecking the Use boxed layout option.

    Q: I’ve installed the theme and nothing has changed? Is this a scam? I want my money back!

    Refunds cost $1 each, please contact me at totallylegitemail@contact.com

    (Have you tried refreshing your browser’s cache? Chances are it’s still grabbing the old files.)


    Credits

    All credits to the original creators of these files; I am merely an editor.
    All credits to the Pi-Hole Team for their great platform.

    Do follow the individual licenses of these files as supplied by the authors.

    Inspired by this thread.

    Thanks to deathbybandaid for the new easy install method!


  • graphsurge

    Graphsurge: Graph Analytics on View Collections using Differential Computations


    Graphsurge is a new system for performing analytical computations on multiple snapshots or views of large-scale static property graphs. Graphsurge allows users to create view collections, a set of related views of a graph created by applying filter predicates on node and edge properties, and run analytical computations on all the views of a collection efficiently.

    Graphsurge is built on top of Timely Dataflow and Differential Dataflow, which provide two huge benefits:

    • Differential Dataflow can incrementally maintain the results of any general computation, including cyclic or iterative computations (which cover many graph algorithms such as Connected Components). Analytical computations in Graphsurge are expressed using Differential operators, which enables reusing computation results across the views of a collection instead of running computations from scratch on each view. This results in huge savings in total runtime.
    • We use the Timely execution engine to seamlessly scale both the materialization of view collections and running analytical computations to a distributed environment, similar to using other execution frameworks such as Spark or Flink.

    Graphsurge stores view collections using a form of delta encoding, where the data for a view GVi represents its difference from the previous view GVi-1. This representation can also be used directly as input to Differential Dataflow computations.

    In general, the runtime of a black-box differential computation (such as the user-defined computations in Graphsurge) is correlated with the total number of diffs of a view collection. Graphsurge enables two key optimizations based on this observation:

    • Collection Ordering: The total number of diffs of a view collection depends on the order of the views (similar views placed next to each other generate fewer diffs), so we want to reorder a given set of views to obtain the lowest number of diffs. This Collection Ordering Problem is related to the Consecutive Block Minimization Problem, which is NP-Hard! Graphsurge solves this problem using a constant-factor approximation algorithm (resulting in up to 10x fewer diffs in our experiments).

    • Adaptive Collection Splitting: Maintaining computation results unsurprisingly implies an overhead for Differential Dataflow, as it needs to check the entire history of a key to determine the effect of a new update. This overhead is especially large when the number of diffs of a view is high, or for computations (like PageRank) that produce a large number of output changes even for a small number of input updates. In such cases, it is faster to run the computation on a view from scratch instead of trying to reuse results from previous views.

      Graphsurge keeps track of the correlation between the number of diffs and the actual computation time, both when running differentially and when rerunning from scratch. It uses a linear regression model to adaptively decide at runtime to split the view collection at the point where rerunning from scratch is predicted to be faster than continuing to run differentially.

    More details on our techniques and experimental results can be found in our paper.

    Using Graphsurge

    Graphsurge is written in Rust. To run the Graphsurge CLI, download and build the binary:

    $ git clone https://github.com/dsg-uwaterloo/graphsurge && cd graphsurge
    $ cargo build --release
    $ ./target/release/graphsurge

    Set the number of worker threads and process id:

    graphsurge> SET THREADS 4 AND PROCESS_ID 0;

    Load a graph:

    graphsurge> LOAD GRAPH WITH
        VERTICES FROM 'data/small_properties/vertices.txt' and
        EDGES FROM 'data/small_properties/edges.txt'
        COMMENT '#';

    Create a view collection:

    graphsurge> CREATE VIEW COLLECTION Years WHERE
        [year <= 2000 and u.country = 'canada' and v.country = 'canada'],
        [year <= 2005 and u.country = 'canada' and v.country = 'canada'],
        [year <= 2010 and u.country = 'canada' and v.country = 'canada'];

    Run computations:

    $ mkdir bfs_results
    graphsurge> RUN COMPUTATION wcc ON COLLECTION Years SAVE RESULTS TO 'bfs_results';

    Running in a distributed environment:

    To run Graphsurge on multiple machines, say on 2 hosts server1 and server2, start Graphsurge and set the process ids:

    # On server1
    graphsurge> SET THREADS 32 AND PROCESS_ID 0;
    # On server2
    graphsurge> SET THREADS 32 AND PROCESS_ID 1;

    Then run the same queries on both of them. Make sure that server1 and server2 can access each other at the specified port.

    graphsurge> LOAD GRAPH WITH
        VERTICES FROM 'data/small_properties/vertices.txt' and
        EDGES FROM 'data/small_properties/edges.txt'
        COMMENT '#';
    graphsurge> CREATE VIEW COLLECTION Years WHERE
        [year <= 2000 and u.country = 'canada' and v.country = 'canada'],
        [year <= 2005 and u.country = 'canada' and v.country = 'canada'],
        [year <= 2010 and u.country = 'canada' and v.country = 'canada']
        HOSTS 'server1:9000' 'server2:9000';
    graphsurge> RUN ARRANGED_DIFFERENTIAL COMPUTATION wcc on COLLECTION Years
        HOSTS 'server1:9000' 'server2:9000';

    The same process can be repeated for additional host machines.

    Writing new computations:

    Graphsurge already has implementations for a set of common graph algorithms. New computations can be written using the Analytics Computation API. You can see examples of how to use the API for bfs and scc.

    Check the experiments folder for examples on how to use Graphsurge.

  • fs2-mqtt

    A pure functional Scala MQTT client library

    Back then…

    Late in 2014, I initiated an MQTT client library for Scala as a side project.
    My purpose was to learn me some Akka while trying to deliver something potentially useful. I quickly
    found the excellent scodec library to encode/decode MQTT protocol frames, making
    this part of the work considerably easier, with a concise and very readable outcome.

    More recently, while getting more and more interested in pure functional programming in Scala, I had the chance to see
    this amazing talk on Skunk from @tpolecat. It’s about
    building, from the ground up, a data access library for Postgres based on FS2 and… scodec.

    Oops!… I did it again.

    I rushed to Skunk, which has been inspirational, and took the opportunity of the
    lock-down days, around April 2020, to learn a lot about cats, cats effect
    and of course FS2. I even found the integration between FS2 and scodec, scodec-stream,
    to be utterly useful.

    With all these tools at hand, and the book Practical-FP in Scala: A hands-on approach
    on my desk, it has been quite (sic) easy to connect everything together and build this purely functional Scala MQTT
    client library.

    Current status

    This library is built in the tagless final style in order to make it, as much as possible, IO monad agnostic for the
    client code using it. Its internals are nevertheless mainly built around FS2, cats effect typeclasses and concurrency
    primitives.

    It implements almost all the MQTT 3.1.1 protocol, managing (re)connections and in flight messages, and allows interacting
    with a Mosquitto broker. It does not support MQTT 5 and, to tell the truth, this is not even envisioned!
    Still, there’s work ahead:

    • finer grained configuration (e.g. number of in flight messages)
    • a proper documentation
    • shame on me, far more test coverage /o\
    • performance evaluation
    • trying to consider Scala.js.

    For examples on how to use it, you can have a look at the local subscriber or the local publisher
    code. Both are based on Cats Effect IO.

    Releases

    Artifacts are available at the Sonatype OSS Repository Hosting service; even the SNAPSHOTS are automatically
    built by GitHub Actions. To include the Sonatype repositories in your SBT build you should add,

    resolvers ++= Seq(
        Resolver.sonatypeRepo("releases"),
        Resolver.sonatypeRepo("snapshots")
    )

    In case you want to easily give this library a try, without the burden of adding resolvers, there is a release synced
    to Maven Central. In this case just add,

    scalaVersion := "3.3.1"
    
    libraryDependencies ++= Seq(
        "net.sigusr" %% "fs2-mqtt" % "1.0.1"
    )

    Dependencies

    Roughly speaking this library depends on:

    It should work seamlessly with various compatible IO monads: Cats Effect IO
    of course, and ZIO as it supports cats effect typeclasses. Monix used to be supported as well, but
    since version 1.0.0 and the migration to cats effect 3.x, this is no longer the case.

    License

    This work is licensed under an Apache Version 2.0 license.


  • lmms-finetune

    Enabling the finetuning of the latest Large Multimodal Models

    Active maintainer: Yuqian Hong

    Initial maintainers: Jingyang Zhang, Yueqian Lin

    About

    More and more large multimodal models (LMMs) are being released from time to time, but the finetuning of these models is not always straightforward. This codebase aims to provide a unified, minimal structure for LMM finetuning. Key design ideas include:

    • the components of the finetuning process (e.g., model loading, data collating) are abstracted, allowing one to easily integrate the latest LMMs into this codebase and finetune them with minimal effort;
    • for all LMMs the 🤗huggingface’s official implementation is used, so that after finetuning one can do inference and everything else in exactly the same way as before with the HF model;
    • the codebase is kept as simple/lightweight as possible, so that it is easy to understand and modify.

    The codebase is quite flexible. It supports the finetuning of various types of LMMs, including:

    See supported_models.md for the full list of supported models. For training strategy, 1) full-finetuning, 2) lora, and 3) q-lora are supported for the LLM component, while 1) full-finetuning and 2) lora are supported for the vision encoder/backbone.

    What’s different from other training frameworks, e.g., LLaMA-Factory, xtuner, swift?

    These are great projects/frameworks with large scale and high-degree optimization. However, due to their scale and complexity, they could be less transparent and less easy to get started with (e.g., I personally feel quite lost when trying to use those frameworks, with a bunch of questions like “how should I format my data”). This codebase (lmms-finetune) is instead designed to be lightweight and simple, meaning that it’s much more likely for you to quickly get started and be able to know almost every detail of the training process if you want. In other words, this is a minimal workable codebase that supports LMM finetuning, while facilitating quick experiments, flexible modifications, and easy integration of new models.

    News

    • 2025/01/27: Qwen2.5 family is supported in the transformers-4.49.0.dev0 branch. At the moment you would need to install the latest transformers from github.
    • 2024/12/16: Thanks to the contribution from lavinal712 (Yuqian), training with Llama-3.2-Vision is now supported. Also there is a useful script merge_lora_weights.py added.
    • 2024/10/16: We added LLaVA-Onevision. See a caveat when using LLaVA-Onevision here. Also we updated the collators to stay in line with the new processing of LLaVA models in transformers.
    • 2024/08/28: Finetuning with gradio webui interface is supported. Try python webui.py.
    • 2024/07/30: Finetuning of vision encoder and projector is now supported.
    • 2024/07/25: Several things are improved. We have 1) released a colab notebook demonstrating a full, successful training run with LLaVA-NeXT-Video-7B (happy to hear from people that they succeeded in their cases too); 2) supported having text-only samples in the training set (see this for one note).
    • 2024/07/20: Initial release of the codebase. More models and optimizations are coming soon. Stay tuned!

    Installation

    # clone this repo
    git clone https://github.com/zjysteven/lmms-finetune.git
    
    # set up a conda environment
    conda create -n lmms-finetune python=3.10 -y
    conda activate lmms-finetune
    ## this will install the latest version of torch
    ## feel free to change it to a specific version
    python -m pip install -r requirements.txt
    
    ## optionally install flash attention
    python -m pip install --no-cache-dir --no-build-isolation flash-attn

    Usage

    A workable example training run (of LLaVA-NeXT-Video-7B) is showcased in this colab notebook, which is a good starting point to get a sense of how to use this codebase. The following sections provide a more detailed guide on how to finetune a model.

    0. See if the model you want to finetune is supported

    Browse supported_models.md. Or run python supported_models.py, which will for example show things like

    Supported models:
      Model ID                      : HuggingFace Path
      ------------------------------------------------
      llava-1.5-7b                  : llava-hf/llava-1.5-7b-hf
      llava-1.5-13b                 : llava-hf/llava-1.5-13b-hf
      llava-next-video-7b           : llava-hf/LLaVA-NeXT-Video-7B-hf
      llava-next-video-7b-32k       : llava-hf/LLaVA-NeXT-Video-7B-32K-hf
      llava-next-video-34b          : llava-hf/LLaVA-NeXT-Video-34B-hf
      llava-interleave-qwen-0.5b    : llava-hf/llava-interleave-qwen-0.5b-hf
      llava-interleave-qwen-7b      : llava-hf/llava-interleave-qwen-7b-hf
      llava-onevision-0.5b-ov       : llava-hf/llava-onevision-qwen2-0.5b-ov-hf
      llava-onevision-7b-ov         : llava-hf/llava-onevision-qwen2-7b-ov-hf
      llava-onevision-72b-ov        : llava-hf/llava-onevision-qwen2-72b-ov-hf
      qwen-vl-chat                  : Qwen/Qwen-VL-Chat
      phi3-v                        : microsoft/Phi-3-vision-128k-instruct
      qwen2-vl-2b-instruct          : Qwen/Qwen2-VL-2B-Instruct
      qwen2-vl-7b-instruct          : Qwen/Qwen2-VL-7B-Instruct
      llama-3.2-11b-vision-instruct : meta-llama/Llama-3.2-11B-Vision-Instruct
      llama-3.2-90b-vision-instruct : meta-llama/Llama-3.2-90B-Vision-Instruct
    

    ✋ Don’t see the one you want? Check out this guide for step-by-step instructions on how to add a new model.

    1. Prepare your finetuning data

    Similar to LLaVA, we expect the data to be in a json file containing a list of dictionaries, where each dictionary is a sample.

    [
        {
            "system_prompt": "You are a helpful assistant.",
            "video": "path/to/video1.mp4",
            "conversations": [
                {
                    "from": "human",
                    "value": "<video>What is this video about?"
                },
                {
                    "from": "gpt",
                    "value": "This video shows a baby crying."
                }
            ]
        }
    ]

    The image and video tokens are assumed to be <image> and <video>. We adopt this format for its readability. Our dataset implementation is general enough to support variations within this format, e.g., multiple image/video inputs in a sample, text-only samples, etc. For more details, see the dataset documentation and find how flexible this json file can be. There are also multiple example json files under example_data for reference.

    Besides this json file, the actual videos and images are by default assumed to be stored in their corresponding folders, and then the paths in the json file should be relative to the video/image root folder. Or the paths can simply be absolute paths.

    ⚠️ If you have text-only entries in your training dataset: the training is likely to fail at some point if 1) your per_device_batch_size is 1, or 2) the number of text-only instances dominates the number of multi-modal instances. This is due to a limitation/bug of deepspeed. If neither of the above two conditions is met, no worries, we got you covered.

    2. Perform finetuning

    Modify the sample training bash script, example_video.sh or example_image.sh (there are no differences other than the model ID and dataset filepath), to specify arguments including the target model, data path, etc. There are comments that explain each argument’s meaning. Then simply kick off the training by running bash example_scripts/example_video.sh or bash example_scripts/example_image.sh. Note that to exactly run the provided example_video.sh, you will need to download the video clips from ShareGPT4Video; see here for instructions.

    📈If you prefer a graphical interface, simply run python webui.py to launch the gradio interface for finetuning.

    3. Inference with finetuned model

    The key here is to correctly load the finetuned model, after that everything is the same as how you would do inference with the corresponding model from huggingface. Refer to the inference documentation for more details, including how to use merge_lora_weights.py to easily obtain a standalone model. Again you can refer to this colab for a complete example.

    Acknowledgements

    We want to thank the huggingface team for actively integrating the newest models into the transformers library. Also, the example finetuning scripts (e.g., this, this, and this) made by HF staff, Niels Rogge and Raushan Turganbay, are very helpful and lay the foundation for this codebase. We also especially thank Raushan Turganbay for her generous discussions and feedback on this project.

    The codebase borrows from, is inspired by, or builds upon the following code, repos, and/or libraries: LLaVA, Qwen, transformers, etc.

    Citation

    If you use lmms-finetune in your research/project, we’d be very happy if you could 1) give us a star, 2) share this repo with others, or 3) cite this codebase:

    @software{Zhang_lmms-finetune,
    author = {Zhang, Jingyang and Lin, Yueqian},
    license = {Apache-2.0},
    title = {{lmms-finetune}},
    url = {https://github.com/zjysteven/lmms-finetune}
    }
    


  • Game-LogAlpha-MSDOS-TurboPascal

    Log Alpha – Turbo Pascal – MS-Dos – 1992

    This page is also available in French.

    Created in February 1992, Log Alpha is one of those logic or IQ test programs. It displays a series of letters, and it’s up to you to find the missing one.

    The program does not generate the series itself. It comes with a game editor.

    I’m providing the source code for the two programs in this project, in anticipation of a new online or Delphi version.

    This game was developed in Turbo Pascal to run under MS-Dos in text mode.

    It’s possible that this program is compilable with Free Pascal but I haven’t tried. Units not supplied can be found in the DeveloppeurPascal/librairies-TurboPascal repository. If any are missing, please let me know so that I can add them.

    You’ll find the binaries (8-bit) and dependencies in the “release” folder. This is the September 19, 1993 version. You can only run it on an MS-Dos emulator, Windows 3.x or 95. There’s no point in trying it on Windows beyond these versions.

    Check out my other open source projects in Turbo Pascal if you’re interested in software archeology.

    Talks and conferences

    Twitch

    Follow my development streams of software, video games, mobile applications and websites on my Twitch channel or as replays on Serial Streameur mostly in French.

    License to use this code repository and its contents

    This source code is distributed under the AGPL 3.0 or later license.

    You are generally free to use the contents of this code repository anywhere, provided that:

    • you mention it in your projects
    • you distribute the modifications made to the files supplied in this project under the AGPL license (leaving the original copyright notices (author, link to this repository, license), which must be supplemented by your own)
    • you distribute the source code of your creations under the AGPL license.

    If this license doesn’t suit your needs, you can purchase the right to use this project under the Apache License 2.0 or a dedicated commercial license (contact the author to explain your needs).

    These source codes are provided as is, without warranty of any kind.

    Certain elements included in this repository may be subject to third-party usage rights (images, sounds, etc.). They are not reusable in your projects unless otherwise stated.

    Support the project and its author

    If you think this project is useful and want to support it, please make a donation to its author. It will help to maintain the code and binaries.

    You can use one of those services :

    or if you speak French you can subscribe to Zone Abo on a monthly or yearly basis and get a lot of resources such as videos and articles.


  • waapitools

    Waapitools

    A collection of Javascript based tools for Wwise, using WAAPI from your browser. Download, unzip anywhere and open index.html to use. Alternatively available on https://waapitools.org for direct use.

    1. Event Soundbank finder

    This tool is used to find out in which soundbank(s) a wwise event is included, upon selection of the event in the Wwise Authoring Tool, regardless of whether the event is directly included in a bank or through the inclusion of one of its parent folders or WWU. This is designed to work around the issue of Wwise’s Reference View ignoring indirect references (a soundbank appears in an event’s references only if the event is directly included in the bank).

    Usage: select an event in Wwise.

    2. Batch Attenuations Editor

    The Batch Attenuations Editor is designed to help with the creation and update of large attenuation sets, by interpolating attenuation curves between a short (eg 1m) and a long (eg 100m) attenuation.

    attenuations

    Usage:

    • Create an attenuations folder or work unit
    • Inside, create 2 attenuations with names ending with their max radius (e.g. 1m, 100m)
    • Edit the attenuations curves, making sure each curve has a matching number of points between the short/long attenuation
    • Select the parent folder
    • In Waapitools, choose how many intermediate attenuations you want to create by setting distance intervals
    • For each curve (volume, aux sends…) and each point, choose how the point’s coordinates will be interpolated from the shortest to the longest attenuation (linear, quadratic, etc.)
    • Commit the new interpolated attenuations to Wwise

    3. Sampler Keymapper

    The Sampler Keymapper automatically sets MIDI root key and key range parameters on children of a blend container if their names contain a note name (e.g. MyPianoSample_C#3 is mapped to MIDI root key 49 and its range is set based on neighbouring samples’ root notes).

    keymapper

    Usage:

    • Create a blend container
    • Inside, create the sound objects representing your sampler notes
      • Each sound object name must end with a note name (e.g. C#3, G7, Eb2…)
    • Select the blend container
    • From Waapitools, commit the note’s MIDI roots and key ranges to Wwise

    4. Notes Review

    Notes Review provides a comprehensive display of all notes in the Wwise project. It’s brutally stupid and will freeze on huge projects.

    5. Renamer

    The Renamer allows for easy search/replace in names of objects in the Actor-Mixer Hierarchy. In addition to the selected Actor-Mixer hierarchy object, it will apply the same renaming to:

    • all its children
    • all the events referencing it


    Usage:

    • Select an object in the actor-mixer hierarchy
    • Edit the search and replace strings
    • Check the changes and commit them to Wwise

    6. Creator

    The Creator helps with the batch creation of children to the currently selected object, from an object type, a list of names and a common prefix.

    creator

    Usage:

    • Select a parent object in your Wwise project
    • Choose an object type to create
    • Fill in the list of the names you want to give your new objects
    • Optionally, choose a common prefix for all your new objects
    • Check the pending objects list
    • Commit the new objects to Wwise

    7. Nester

    The Nester can ‘nest’ selected objects in new parents of your chosen type. The new parents will be named after their children.

    A typical use case would be to create random containers from a list of freshly imported sounds, each set of similarly named sounds getting their own parent.

    nester

    Usage:

    • Select multiple objects in your Wwise project
    • Check the tool groups them as desired, based on their names
    • Choose the type of object you want to create as their new parents
    • Commit the new parents to Wwise
  • curl

    curl


    This library is an alternative to “https://github.com/php-curl-class/php-curl-class“. It is easy to use, with CurlRequest and CurlResponse objects.

    Installation

    Install via Composer

    $ composer require php-guard/curl
    

    Requirements

    • php: ^7.1

    Usage

    Quick Start
    require __DIR__ . '/vendor/autoload.php';
    
    use \PhpGuard\Curl\Curl;
    use \PhpGuard\Curl\CurlError;
    
    $curl = new Curl();

    try {
    	// Execute a single request
    	$response = $curl->get('http://example.com'); // Return a CurlResponse
    	echo $response->raw();

    	// Execute multiple requests
    	$responses = $curl->multi()       // Create a MultiCurl object
    		->get('http://example.com')   // Add a get request
    		->post('http://example.com')  // Add a post request
    		->execute();                  // Return an array of CurlResponse

    	foreach ($responses as $response) {
    		echo $response->raw();
    	}
    } catch (CurlError $e) {
    	echo 'Error: ' . $e->getCode() . ': ' . $e->getMessage() . "\n";
    }
    
    Available Methods

    \PhpGuard\Curl\Curl

    HTTP methods

    get(string $url, $query = null, array $headers = []): CurlResponse
    post(string $url, $data = null, $query = null, array $headers = []): CurlResponse
    put(string $url, $data = null, $query = null, array $headers = []): CurlResponse
    patch(string $url, $data = null, $query = null, array $headers = []): CurlResponse
    delete(string $url, $data = null, $query = null, array $headers = []): CurlResponse
    

    Run a single request

    execute(CurlRequest  $request): CurlResponse
    

    Prepare multiple requests

    multi(array  $options  = []): MultiCurl
    

    Edit request and response processing (advanced usage)

    getCurlRequestFactory(): CurlRequestFactory
    getRequestModifierPipeline(): RequestModifierPipeline
    

    \PhpGuard\Curl\CurlRequest

    You can use Curl to execute requests or you can use the CurlRequestFactory to return CurlRequest instances.

    Properties url, method and data have getters and setters.

    The method setHeaderContentType(string $contentType) is a shortcut for

    $curlRequest->getHeaders()['Content-Type'] = $contentType;
    

    Other methods

    execute(bool $throwExceptionOnHttpError = false): CurlResponse
    getCurlOptions(): CurlOptions
    getHeaders(): Headers
    

    \PhpGuard\Curl\MultiCurl

    get(string $url, $query = null, array $headers = []): self
    post(string $url, $data = null, $query = null, array $headers = []): self
    put(string $url, $data = null, $query = null, array $headers = []): self
    patch(string $url, $data = null, $query = null, array $headers = []): self
    delete(string $url, $data = null, $query = null, array $headers = []): self
    execute(): CurlResponse[]
    

    \PhpGuard\Curl\CurlResponse

    statusCode(): int
    isError(): bool // True if status code >= 300
    headers(): Headers
    raw() // Raw content of the response
    json() // Array or false if not a json response
    

    \PhpGuard\Curl\CurlRequestFactory

    Set the base url for all requests

     setBaseUrl(?string $baseUrl): self
    

    Set the option curl CURLOPT_SSL_VERIFYPEER

     setSslVerifyPeer(bool $value): self
    

    Create a CurlRequest

     create(string $method, string $url, $data = null, $query = null, array $headers = []): CurlRequest
    

    Edit other default curl options

     getDefaultCurlOptions(): CurlOptions
     getDefaultHeaders(): Headers
    
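    As a rough sketch of how these factory methods fit together ($curl is the Curl instance from the Quick Start; the base URL and endpoint below are made up for illustration):

    $factory = $curl->getCurlRequestFactory();
    $factory->setBaseUrl('https://api.example.com');

    // Build a request relative to the base URL and run it,
    // throwing an exception on HTTP error statuses via the $throwExceptionOnHttpError flag
    $request = $factory->create('GET', '/users', null, ['page' => 1]);
    $response = $request->execute(true);
    echo $response->raw();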

    \PhpGuard\Curl\Collection\CurlOptions

    This class implements \ArrayAccess. It can therefore be used as an array.

    \PhpGuard\Curl\Collection\Headers

    This class implements \ArrayAccess. It can therefore be used as an array.

    In addition, the keys are case insensitive.

    \PhpGuard\Curl\RequestModifierPipeline

    Add an object to modify CURL requests

    pipe(RequestModifierInterface $requestModifier): self
    

    By default, FileRequestModifier and PlainTextRequestModifier are active. If necessary, you can add an instance of ProxyRequestModifier.

    • ProxyRequestModifier allows you to define curl options to use a proxy

    • FileRequestModifier is used to manage file paths starting with @ and passed as a parameter by transforming them into CurlFile and then modifying the HTTP Content-Type header.

    • PlainTextRequestModifier changes the HTTP Content-Type header to text/plain when a string is passed as a parameter.

  • lg-motion

    LG Motion – Android Application for Liquid Galaxy

    The incredible app designed to elevate your Liquid Galaxy experience like never before! With a seamless connection to your Liquid Galaxy rig, you can now control it remotely using facial expressions, voice commands, hand gestures, poses, and even object manipulation, all at your fingertips!

    The app utilizes the power of the Mediapipe API for recognizing and processing facial gestures, hand gestures, and body poses, making it easy for users to control their Liquid Galaxy experience through their Android device.

    Table of Contents

    Screenshots

    Light Theme

    Light Theme Screenshots

    Dark Theme

    Dark Theme Screenshots

    Map Styles

    Map Styles Screenshots

    Installation

    Manual Installation

    1. Clone this repository to your local machine using the following command:
    git clone https://github.com/SidharthMudgil/lg-motion.git
    
    2. Open the project in Android Studio.

    3. Build the project and install it on your Android device (see the command-line sketch below for one way to do this).
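
    If you prefer the command line for step 3, a typical debug build and install looks roughly like this (assuming the standard Gradle wrapper, the default app module output path, and a device connected via adb):

    ./gradlew assembleDebug
    adb install -r app/build/outputs/apk/debug/app-debug.apk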

    APK Installation

    1. Download the latest APK file from the “Releases” section of this GitHub repository.

      Download APK Download AAB
    2. Enable “Unknown Sources” in your Android device settings to allow the installation of apps from external sources.

    3. Locate the downloaded APK file and install it on your Android device.

    Contribution

    We welcome contributions to LG Motion from the open-source community. If you want to contribute to the project, follow these steps:

    1. Fork this repository to your GitHub account.

    2. Clone the forked repository to your local machine.

    git clone https://github.com/your-username/lg-motion.git
    
    3. Create a new branch with a specific naming scheme that describes the purpose of your contribution. For example:

      • For adding a new feature: feature-add-new-feature
      • For fixing a specific issue: fix-issue-description
      • For general improvements: enhancement-improvement-description
    4. Make the necessary changes and improvements in your branch.

    5. Test your changes to ensure they are working as expected.

    6. Follow the coding style and naming conventions of the existing codebase.

    7. Commit your changes and push them to your forked repository:

    git commit -m "Add a concise commit message describing your changes"
    git push origin your-branch-name
    
    8. Create a pull request from your branch to the main LG Motion repository.

    9. The pull request will be reviewed, and if everything is satisfactory, it will be merged into the main repository.

    Please provide clear and descriptive pull request messages, along with before and after screenshots, to help with the review process.

    Let’s build LG Motion together and make Liquid Galaxy control even more exciting and accessible for everyone!

  • latex-protocol

    A LaTeX protocol template

    Quickstart

    git clone git@github.com:tgm-hit/latex-protocol.git protocol
    cd protocol
    ./maketex

    Contents

    Dependencies

    For local compilation a TeX distribution is required. Both TeX Live and MiKTeX are fully supported. The compilation script requires Python 3.

    Some features might depend on packages which are not installed by default!
    A growing list of optional dependencies can be found here.

    Arch Linux

    sudo pacman -S python-pygments texlive-meta texlive-langgerman texlive-langenglish texlive-langother texlive-fontsextra

    Usage

    With Python 3 and LaTeX installed you can easily compile your project using the maketex script, which simplifies the compilation process, handles multiple source files and removes unnecessary files.
    For most use-cases you only have to run ./maketex which compiles the main.tex file using pdflatex while looking for bibliography and glossary entries.

    Latex

    If (for some reason) you do not want to depend on the maketex script you can also use pdflatex, makeglossaries and bibtex from the shell.

    pdflatex -shell-escape main	# Initial compilation
    makeglossaries main 		# Compile glossaries
    pdflatex -shell-escape main	# Progressive compilation for glossaries
    bibtex main 			# Compile bibliography
    pdflatex -shell-escape main	# Progressive compilation for bibtex
    pdflatex -shell-escape main	# Progressive compilation for bibtex

    TexStudio

    In TexStudio a custom command can be added under Options -> Configure TexStudio -> Build -> User Commands. The following line completely compiles a LaTeX file with glossaries, bibliography and minted.

    pdflatex -shell-escape -interaction=nonstopmode % | txs:///makeglossaries | pdflatex -shell-escape -interaction=nonstopmode % | txs:///bibtex | pdflatex -shell-escape -interaction=nonstopmode % | pdflatex -shell-escape -interaction=nonstopmode % | txs:///view-pdf-internal --embedded

    Of course you can also add the maketex script as a user command but you might want to set -lm so TexStudio can find your log files and minted cache after cleanup.

    python maketex -lm | txs:///view-pdf-internal --embedded

    Overleaf

    Overleaf is a popular online LaTeX editor and is also fully supported by this template. Just download the archived repository or the latest release and upload it as a new project.

    Overleaf usage

    Options

    Options are added to the \documentclass command usually found in main.tex.

    \documentclass[<option1>,<option2>,...]{protocol}

    Option Result
    en Set main document language to english
    landscape Change the page format to landscape orientation
    minted Add and configure minted package
    natbib Change bibtex backend to natbib
    nobib No bibliography
    nofonts No additional fonts
    noglo No acronyms and glossary
    nologos No logos on titlepage
    notable No table on titlepage
    notitle No titlepage
    notoc No table of contents
    parskip Skip a line instead of indenting after blank line
    sans Load sans-serif fonts

    Variables

    Variables are set as commands with their parameter being the variable value.

    \myvariable{value}

    Command Content
    mysubtitle Subtitle of group
    mysubject Thematic group / subject
    mycourse Current course / class
    myteacher Current teacher
    myversion Current version of the document
    mybegin Start of documentation
    myfinish End of documentation



  • quickcart

    QuickCart – E-commerce Platform

    Overview

    QuickCart is a full-fledged e-commerce web application built using Django and Tailwind CSS. It provides a user-friendly shopping experience with features like product management, cart functionality, order tracking, authentication, and more. The application is structured with rendered templates instead of an API-based implementation, ensuring a seamless user experience.

    Features

    User Management

    • User registration and login
    • Profile management with avatar upload
    • Password reset and change password functionality
    • Two-factor authentication (2FA) enablement

    Product Management

    • Browse and search for products
    • View product details
    • Add, edit, and delete products (admin only)
    • Product reviews and ratings
    • Wishlist functionality

    Shopping Cart & Checkout

    • Add and remove items from cart
    • Update cart item quantities
    • View cart details with total price
    • Checkout process with order summary
    • Address management for shipping and billing

    Order Management

    • Place orders and track order status
    • View order history
    • Cancel orders if eligible
    • Order confirmation page
    • Payment processing simulation

    Additional Features

    • Dark mode support
    • Responsive UI built with Tailwind CSS
    • Optimized database queries for performance
    • Admin dashboard for order and product management

    Installation

    Prerequisites

    Ensure you have the following installed:

    • Python 3.8+
    • Django 4+
    • SQLite (default) or any supported database
    • Tailwind CSS (configured with Django reload)

    Setup Instructions

    1. Clone the repository:

      git clone https://github.com/JunaidSumsaal/quickcart.git
      cd quickcart
    2. Create a virtual environment and activate it:

      python -m venv venv
      source venv/bin/activate  # On Windows use: venv\Scripts\activate
    3. Install dependencies:

      pip install -r requirements.txt
    4. Apply database migrations:

      python manage.py migrate
    5. Create a superuser:

      python manage.py createsuperuser
    6. Run the development server:

      python manage.py runserver
    7. Open the application in your browser:

      http://127.0.0.1:8000/
      

    Project Structure

    quickcart/
    ├── accounts/         # User authentication and profile management
    ├── cart/             # Shopping cart functionality
    ├── orders/           # Order management and tracking
    ├── products/         # Product catalog and reviews
    ├── templates/        # HTML templates organized by app
    ├── static/           # Static files including Tailwind CSS
    ├── core/             # Settings and configuration
    ├── manage.py         # Django management script
    └── README.md         # Project documentation
    

    Deployment

    This project is configured for deployment on Render. To deploy:

    • Ensure environment variables are set for database and secret key.
    • Use a production-ready WSGI server like Gunicorn (see the sketch below).
    • Set up Nginx as a reverse proxy.
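
    As a rough sketch, a Gunicorn invocation for this layout might look like the following (the core.wsgi module path is inferred from the project structure above and may differ in your setup):

    pip install gunicorn
    gunicorn core.wsgi:application --bind 0.0.0.0:8000 --workers 3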

    Contributing

    Contributions are welcome! Follow these steps:

    1. Fork the repository.
    2. Create a new branch: git checkout -b feature-branch
    3. Commit your changes: git commit -m "Add new feature"
    4. Push to the branch: git push origin feature-branch
    5. Submit a pull request.

    License

    This project is licensed under the MIT License.
