PS3 Controller in Arch

Configuration for PS3 Controller in Arch Linux

This quick guide explains how to set up your PS3 controller in Arch Linux and make sure the Bluetooth connection works.

Requirements

Make sure to install the Bluetooth packages and have a working USB Bluetooth device.

sudo pacman -S bluez bluez-utils bluez-plugins

Create the file /etc/bluetooth/main.conf with the following content:

[General]
ClassicBondedOnly=false

Then restart your computer.

Press the middle PS button until the LEDs start blinking, then connect the USB cable from the controller to the PC, and you should see a prompt to authorize the device.

After that you can unplug the controller and manage it via your Bluetooth manager.

NOTE: Take into account that the ClassicBondedOnly option regresses your security configuration for Bluetooth pairing; more details here. Use at your own risk.


Scala Variance

Intro

Variance lets you control how type parameters behave with regard to subtyping. Scala supports variance annotations on the type parameters of generic classes, allowing them to be covariant or contravariant; they are invariant if no annotation is used. The use of variance in the type system allows us to make intuitive connections between complex types.

Variance: If B Extends A, Should List[B] extend List[A] ?

Invariance

By default, type parameters in Scala are invariant: subtyping relationships between the type parameters aren’t reflected in the parameterized type.

trait List[A]

Example

class Box[A](var content: A)

We’re going to be putting values of type Animal in it. This type is defined as follows:

abstract class Animal {
  def name: String
}
case class Cat(name: String) extends Animal
case class Dog(name: String) extends Animal

We can say that Cat is a subtype of Animal, and that Dog is also a subtype of Animal. That means that the following is well-typed:

val myAnimal: Animal = Cat("Felix")

What about boxes? Is Box[Cat] a subtype of Box[Animal], like Cat is a subtype of Animal?

val myCatBox: Box[Cat] = new Box[Cat](Cat("Felix"))
val myAnimalBox: Box[Animal] = myCatBox // this doesn't compile

If it did compile, we could then read the content as an Animal and, worse, put a Dog into our cat box:

val myAnimal: Animal = myAnimalBox.content
myAnimalBox.content = Dog("Fido")

From this, we have to conclude that Box[Cat] and Box[Animal] can’t have a subtyping relationship, even though Cat and Animal do.

Covariance

The problem we ran into above is that, because we could put a Dog in an Animal box, a Cat box can't be an Animal box. If we can only take values out, however, the problem disappears:

trait List[+A]

Example:

class ImmutableBox[+A](val content: A)
val catbox: ImmutableBox[Cat] = new ImmutableBox[Cat](Cat("Felix"))
val animalBox: ImmutableBox[Animal] = catbox // now this compiles

We say that ImmutableBox is covariant in A, and this is indicated by the + before the A.
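As a usage sketch (re-declaring the types above so it runs stand-alone), covariance lets a box of any Animal subtype be passed where an ImmutableBox[Animal] is expected:

```scala
abstract class Animal { def name: String }
case class Cat(name: String) extends Animal
case class Dog(name: String) extends Animal

// Covariant in A: we can only read the content, never replace it
class ImmutableBox[+A](val content: A)

// Accepts a box of Animal or of any Animal subtype
def nameOf(box: ImmutableBox[Animal]): String = box.content.name

val felix = nameOf(new ImmutableBox(Cat("Felix"))) // "Felix"
val fido  = nameOf(new ImmutableBox(Dog("Fido")))  // "Fido"
```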

Contravariance

We've seen that we can accomplish covariance by making sure we can only get values out of the covariant type, never put them in. What about the opposite: something you can put values into, but can't take out?

trait List[-A]

Example:

abstract class Serializer[-A] {
  def serialize(a: A): String
}

val animalSerializer: Serializer[Animal] = new Serializer[Animal] {
  def serialize(animal: Animal): String = s"""{ "name": "${animal.name}" }"""
}
val catSerializer: Serializer[Cat] = animalSerializer
catSerializer.serialize(Cat("Felix"))

We say that Serializer is contravariant in A, and this is indicated by the - before the A. A more general serializer is a subtype of a more specific serializer.

More formally, that gives us the reverse relationship: given some class Contra[-T], then if A is a subtype of B, Contra[B] is a subtype of Contra[A].
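Scala's own function type follows the same rule: Function1[-A, +B] is contravariant in its argument, so a function written for Animal can stand in where a function on Cat is expected (a small stand-alone sketch reusing the types above):

```scala
abstract class Animal { def name: String }
case class Cat(name: String) extends Animal

// A function on the supertype...
val describeAnimal: Animal => String = a => s"animal named ${a.name}"

// ...can be used where a function on the subtype is required,
// because Function1 is contravariant in its input type
val describeCat: Cat => String = describeAnimal

val res = describeCat(Cat("Felix")) // "animal named Felix"
```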

Bounded Types

Variance often goes hand in hand with members defined with upper or lower type bounds, as in the following example.

class Car
class SuperCar extends Car
class Garage[T <: Car](car: T)
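To see both kinds of bounds in action, here is a small stand-alone sketch: an upper bound (T <: Car) restricts what a Garage can hold, while a lower bound (B >: A) on a hypothetical covariant list lets prepend widen the element type safely:

```scala
class Car { override def toString = "Car" }
class SuperCar extends Car { override def toString = "SuperCar" }

// Upper bound: T must be Car or a subtype of Car
class Garage[T <: Car](val car: T)

// Lower bound on a covariant list: prepending a supertype widens the list type
sealed trait MyList[+A] {
  def prepend[B >: A](elem: B): MyList[B] = Cons(elem, this)
}
case object Empty extends MyList[Nothing]
case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

val garage = new Garage(new SuperCar)               // Garage[SuperCar]
val supers: MyList[SuperCar] = Cons(new SuperCar, Empty)
val widened: MyList[Car] = supers.prepend(new Car)  // B inferred as Car
```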

More details on bounded types in the following articles:

Immutability and Variance

Immutability constitutes an important part of the design decision behind using variance. For example, Scala’s collections systematically distinguish between mutable and immutable collections. The main issue is that a covariant mutable collection can break type safety. This is why List is a covariant collection, while scala.collection.mutable.ListBuffer is an invariant collection.
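This difference is easy to check: assigning across element types compiles for the covariant immutable List but not for the invariant ListBuffer (a sketch with the hypothetical Animal/Cat types from above):

```scala
import scala.collection.mutable.ListBuffer

abstract class Animal { def name: String }
case class Cat(name: String) extends Animal

val cats: List[Cat] = List(Cat("Felix"))
val animals: List[Animal] = cats // compiles: immutable List is covariant

val catBuffer: ListBuffer[Cat] = ListBuffer(Cat("Felix"))
// val animalBuffer: ListBuffer[Animal] = catBuffer // does not compile: ListBuffer is invariant

val first = animals.head.name // "Felix"
```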

Comparison With Other Languages

Variance is supported in different ways by languages similar to Scala. Scala's tendency towards immutable types means that covariant and contravariant types are more common than in other languages, since a mutable generic type must be invariant.


.NET Aspire Sample Application

Intro

In this article I will walk through the setup of a .NET 8.0 Aspire application on Linux, following the MS Official Guide.

About

Cloud-native apps often require connections to various services such as databases, storage and caching solutions, messaging providers, or other web services. .NET Aspire is designed to streamline connections and configurations between these types of services.

Requirements

In order to use .NET Aspire one needs to have .NET 8.0. As I'm using Ubuntu for this article, I've followed these steps to prepare the environment.

Install .NET SDK and Runtime

sudo apt-get update && sudo apt-get install -y dotnet-sdk-8.0
sudo apt-get update && sudo apt-get install -y aspnetcore-runtime-8.0

NOTE: For some reason, on Ubuntu 22.04 there seem to be conflicts between dotnet-sdk-8.0 and dotnet-sdk-6.0, so I had to remove the older versions. Take that into account if you maintain other projects on those versions, as you may need extra work.

Check existing versions

Execute the following command to validate that you have 8.0 available:

dotnet --list-runtimes

You should have something similar for the runtimes

Microsoft.AspNetCore.App 8.0.0 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 8.0.0 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

And now the SDK

dotnet --list-sdks
8.0.100 [/usr/share/dotnet/sdk]

Next, update the workloads and check that the Aspire templates are available.

Update Workloads

sudo dotnet workload update
sudo dotnet workload install aspire

Check existing Templates

Run the following to make sure you have the required templates

dotnet new --list | grep aspire

You should get something like:

.NET Aspire Application          aspire          [C#]  Common/.NET Aspire/Cloud/Web/Web API/API/Service
.NET Aspire Starter Application  aspire-starter  [C#]  Common/.NET Aspire/Blazor/Web/Web API/API/Service/Cloud

Generate the Application

To generate the application run the following command:

dotnet new aspire-starter --use-redis-cache --output AspireSample

About the Application

The solution consists of the following projects:

  • AspireSample.ApiService: An ASP.NET Core Minimal API project is used to provide data to the front end. This project depends on the shared AspireSample.ServiceDefaults project.
  • AspireSample.AppHost: An orchestrator project designed to connect and configure the different projects and services of your app. The orchestrator should be set as the Startup project, and it depends on the AspireSample.ApiService and AspireSample.Web projects.
  • AspireSample.ServiceDefaults: A .NET Aspire shared project to manage configurations that are reused across the projects in your solution related to resilience, service discovery, and telemetry.
  • AspireSample.Web: An ASP.NET Core Blazor App project with default .NET Aspire service configurations. This project depends on the AspireSample.ServiceDefaults project. For more information, see .NET Aspire service defaults.

Run the Application

To run the application execute

dotnet run --project AspireSample/AspireSample.AppHost

You will get a similar output

Building...
info: Aspire.Dashboard.DashboardWebApplication[0]
      Now listening on: http://localhost:15214
info: Aspire.Dashboard.DashboardWebApplication[0]
      OTLP server running at: http://localhost:16176
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {13491cb7-f7b7-4ace-9ab5-0a6b77bf559f} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /home/rui.ramos/Development/local/aspire-test/AspireSample/AspireSample.AppHost

Check out the Aspire Dashboard in your browser at http://localhost:15214

And the web frontend application at http://localhost:5119

AspireSample.AppHost

Checking Program.cs we can see the relevant wiring:

var builder = DistributedApplication.CreateBuilder(args);

var cache = builder.AddRedisContainer("cache");

var apiservice = builder.AddProject<Projects.AspireSample_ApiService>("apiservice");

builder.AddProject<Projects.AspireSample_Web>("webfrontend")
       .WithReference(cache)
       .WithReference(apiservice);

builder.Build().Run();

The preceding code creates a DistributedApplication builder, adds a Redis container and the API service, and wires the sample web application to both.

Why .NET Aspire?

.NET Aspire is designed to improve the experience of building .NET cloud-native apps. It provides a consistent, opinionated set of tools and patterns that help you build and run distributed apps. .NET Aspire is designed to help you with:

  • Orchestration: .NET Aspire provides features for running and connecting multi-project applications and their dependencies.
  • Components: .NET Aspire components are NuGet packages for commonly used services, such as Redis or Postgres, with standardized interfaces ensuring they connect consistently and seamlessly with your app.
  • Tooling: .NET Aspire comes with project templates and tooling experiences for Visual Studio and the dotnet CLI that help you create and interact with .NET Aspire apps.


Ngrok

Intro

In this article I configure a local endpoint using ngrok for testing purposes.

What is ngrok ?

ngrok is a globally distributed reverse proxy that secures, protects and accelerates your applications and network services, no matter where you run them. You can think of ngrok as the front door to your applications

Requirements

For this guide I will use snap. Check out the official guide for other alternatives.

snap install ngrok

Configuration

The next step is to create an account with the Free tier at https://ngrok.com.

NOTE: This option is intended only for testing purposes; for production workloads you should consider a different option based on your network usage. Use these instructions at your own risk.

When you access the service you can get your token; run the following command to add it to your local configuration:

ngrok config add-authtoken <TOKEN>

You can then verify that your configuration checks out.

Test

Open two terminals.
On the first let’s start a listener using netcat

nc -l -p 9393

On the second one let’s spin up ngrok

ngrok tcp localhost:9393

You will be presented with a forwarding URL (check the Forwarding line in the output).

For my example it was: tcp://0.tcp.eu.ngrok.io:15537

If you open http://0.tcp.eu.ngrok.io:15537/ in a browser, you should see the HTTP GET request in the listener terminal:

GET / HTTP/1.1
Host: 0.tcp.eu.ngrok.io:15537
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,pt-PT;q=0.8,pt;q=0.7

You could use a much more interesting example, like MockingBird or some other data-mocking service, but for reference this will do.

Another point to take into consideration is security.
When you create the endpoint you can (and should) define a security mechanism, like OAuth.

You could run something like the following to use Google OAuth:

ngrok http http://localhost:8080 --oauth=google --oauth-allow-email=<YOUR_EMAIL>

Take into account that IP-based access control requires upgrading your plan.

Conclusion

ngrok is very easy to set up. Although I didn't go in depth on CI/CD configuration, their official documentation has more information on that (for example, GitHub integration). Check their documentation regarding production use cases.

Personally, I find it quite interesting if you want to speed up development and quickly expose a local endpoint to a client in a secure and reliable way, or quickly provide a webhook for testing something.

Let me know if you know similar alternatives that I should look into.


DuckDB Mock Environment

Intro

In this article I will go through DuckDB, an in-process SQL OLAP database management system, and set up some mock data to run a few tests.

Setup

To set up the DuckDB client, install the following pip packages:

pip install -U duckcli
pip install duckdb==0.9.2

duckcli provides autocompletion in the terminal.

NOTE: There are other client options, like ODBC or Node; check the official page for those cases.

Generate Mock Data

DuckDB reads data directly from files, and supports CSV, Parquet, JSON, Excel, and more.

Now let's generate some data with jafgen and import it into our SQL engine.

jafgen --years 1

This will generate 6 CSV files with mock data

jaffle-data/
├── raw_customers.csv
├── raw_items.csv
├── raw_orders.csv
├── raw_products.csv
├── raw_stores.csv
└── raw_supplies.csv

Let's start the client and create managed tables for each:

duckcli mydatabase.db

The CSVs have a header line, so let's create the tables using the following commands:

CREATE TABLE raw_items AS SELECT * FROM read_csv_auto('jaffle-data/raw_items.csv',header = true);
CREATE TABLE raw_orders AS SELECT * FROM read_csv_auto('jaffle-data/raw_orders.csv',header = true);
CREATE TABLE raw_products AS SELECT * FROM read_csv_auto('jaffle-data/raw_products.csv',header = true);
CREATE TABLE raw_stores AS SELECT * FROM read_csv_auto('jaffle-data/raw_stores.csv',header = true);
CREATE TABLE raw_supplies AS SELECT * FROM read_csv_auto('jaffle-data/raw_supplies.csv',header = true);

There are other options, like defining a different delimiter or specifying the columns. Check the official page for more details on CSV import.

mydatabase.db> show tables;
+--------------+
| name         |
+--------------+
| raw_items    |
| raw_orders   |
| raw_products |
| raw_stores   |
| raw_supplies |
+--------------+
5 rows in set
Time: 0.030s

Disclaimer

DuckDB seems to be blazing fast, and it also has the option to run in-memory. It is important, however, to identify the use cases where this backend presents benefits and where it does not.

When or Not to use

DuckDB aims to automatically achieve high performance by using well-chosen default configurations and having a forgiving architecture. Of course, there are still opportunities for tuning the system for specific workloads. The Performance Guide contains guidelines and tips for achieving good performance when loading and processing data with DuckDB.

When to use DuckDB

  • Processing and storing tabular datasets, e.g., from CSV or Parquet files
  • Interactive data analysis, e.g., join & aggregate multiple large tables
  • Concurrent large changes, to multiple large tables, e.g., appending rows, adding/removing/updating columns
  • Large result set transfer to client

When to not use DuckDB

  • High-volume transactional use cases (e.g., tracking orders in a webshop)
  • Large client/server installations for centralized enterprise data warehousing
  • Writing to a single database from multiple concurrent processes
  • Multiple concurrent processes reading from a single writable database

Conclusion

In this article I went through the process of setting up DuckDB in a local environment and loading some data into it. This database shows some interesting benchmark numbers, and I would suggest trying it out, especially if your use case doesn't involve transactional data or multiple concurrent processes reading from a single writable database. For staging processes, development environments, or single-threaded CDC processes it seems very interesting.

I will certainly use this more in the future. The documentation you can find on the official website also deserves very positive feedback.

I haven't found direct support for Delta yet, although it supports Parquet.

If you want to understand better why to choose DuckDB, please check this article: Why DuckDB.
