Transform text into emojis. A practical end-to-end ML lifecycle: data collection, SFT & DPO training, evaluation, inference with llama.cpp, and observability (Prometheus, Grafana, LangSmith).
The aim of this project is to provide a simple and easy-to-use implementation of LoRA for fine-tuning LLMs. To test the implementation, we tackle the MNLI subtask of the GLUE benchmark.
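The core LoRA idea the project implements can be sketched in a few lines: a frozen weight matrix is augmented with a trainable low-rank update, scaled by a ratio of two hyperparameters. The shapes and initialization below are illustrative, not taken from the repository.

```python
import numpy as np

# Minimal LoRA sketch (hypothetical shapes): the frozen weight W is augmented
# with a low-rank update B @ A scaled by alpha / r; only A and B are trained.
d, k, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init

def lora_forward(x):
    # y = x W^T + (alpha / r) * x (B A)^T
    return x @ W.T + (alpha / r) * (x @ (B @ A).T)

x = rng.normal(size=(3, k))
y = lora_forward(x)
# With B initialized to zero, the adapter is a no-op and the output matches
# the frozen base layer exactly -- the standard LoRA starting point.
assert np.allclose(y, x @ W.T)
```

Zero-initializing `B` guarantees training starts from the pretrained model's behavior, which is why LoRA fine-tuning is stable from step one.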
microViT is a streamlined implementation of the Vision Transformer (ViT) architecture, focusing solely on the encoder component. This project applies the microViT model to the classic task of handwritten digit classification using the MNIST dataset.
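The first step of any ViT encoder is splitting the image into flat patches that become the token sequence. The patch size below (4×4, giving a 7×7 grid for MNIST's 28×28 images) is an assumption for illustration; microViT's actual hyperparameters may differ.

```python
import numpy as np

# Hypothetical ViT patchification sketch for MNIST: a 28x28 image becomes a
# sequence of 49 flattened 4x4 patches, each of dimension 16.
def patchify(img, p=4):
    h, w = img.shape
    # (h//p, p, w//p, p) -> (h//p, w//p, p, p): group pixels by patch
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)   # (num_patches, patch_dim)

img = np.zeros((28, 28))
print(patchify(img).shape)  # (49, 16)
```

Each of these 49 vectors is then linearly projected and fed to the Transformer encoder as a token, alongside a learned position embedding.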
microTransformer is a from-scratch implementation of the original Transformer architecture, with both the encoder and decoder. The task is sorting a sequence of characters: for example, the sequence 'ABCB' is sorted as 'ABBC'.
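The toy task has an exact oracle, which makes building the training targets (and evaluating the model) trivial; a minimal sketch:

```python
# Ground-truth target for the character-sorting task: the sorted sequence.
# The model learns to map the input sequence to this output.
def target(seq: str) -> str:
    return "".join(sorted(seq))

print(target("ABCB"))  # ABBC
```

Because the target is computable, arbitrarily many (input, output) training pairs can be generated on the fly from random character sequences.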
A framework inspired by CycleGAN for domain adaptation and generalization: GANs map images into a universal domain for segmentation tasks, unifying appearance across domains through adversarial training.
Using web scraping techniques, we collect images of the works in the Prado Museum. Face detection algorithms locate the painted faces, and Siamese neural networks produce embeddings that encode them; matching your own face then reduces to finding the closest embedding.
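The final matching step is a nearest-neighbor search in embedding space. The sketch below uses cosine similarity over a small toy gallery; the embedding dimension, the metric, and the `closest` helper are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Hypothetical sketch of the matching step: given a query face embedding,
# return the index of the closest gallery embedding by cosine similarity.
def closest(query, gallery):
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return int(np.argmax(g @ q))        # highest cosine similarity wins

# Toy gallery: one row per portrait-face embedding (2-D for readability).
gallery = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7, 0.7]])
print(closest(np.array([0.6, 0.8]), gallery))  # 2
```

For hundreds of portraits a brute-force scan like this is plenty fast; an approximate index would only matter at much larger gallery sizes.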
Analysis of the behavior of the Spanish fruit and vegetable market during the pandemic, using data from several sources. Third place in a national competition.