
PQC ML

Post-Quantum Cryptography (PQC) Basics

Quantum computers threaten current public-key algorithms like RSA and ECDH via Shor's algorithm, which factors large numbers or computes discrete logs efficiently. PQC develops algorithms resistant to both classical and quantum attacks. NIST standardized the first set in 2024:

  • ML-KEM for key encapsulation (key exchange).
  • ML-DSA and SLH-DSA for signatures.

This matters for your work in certificate/identity systems: TLS secures credential issuance, revocation, and verification. Migrating to PQC protects against "harvest now, decrypt later" attacks, where adversaries store encrypted traffic today for future quantum decryption.

CRYSTALS-Kyber vs. ML-KEM

CRYSTALS-Kyber is the original lattice-based Key Encapsulation Mechanism (KEM) from the CRYSTALS team, submitted to NIST's PQC process. It reached round 3 and was selected for standardization in 2022.

NIST standardized it as ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism) in FIPS 203 (August 2024). ML-KEM derives directly from Kyber (round 3 version) with minor tweaks:

  • Better domain separation in key generation to prevent cross-protocol attacks.
  • Slight changes in seed generation for public values.

They are essentially the same algorithm—many implementations still call it "Kyber," but the official standard is ML-KEM.

How ML-KEM (Kyber) Works

It bases security on the Module Learning With Errors (MLWE) problem, a structured lattice problem hard even for quantum computers.

High-level flow:

  1. Key Generation
    • Randomly generate a public matrix A (from a seed).
    • Sample small secret vector s and error e.
    • Public key pk = (A, t), where t ≈ A·s + e (compressed to reduce size).
    • Private key sk includes s.
  2. Encapsulation (Sender)
    • To share a secret with pk holder: sample randomness, compute noisy equations.
    • Output: ciphertext ct (encapsulated key) + shared secret ss (32 bytes).
  3. Decapsulation (Receiver)
    • Use sk to remove noise from ct and recover the same ss.

Security achieves IND-CCA2 via the Fujisaki-Okamoto transform with implicit rejection of invalid ciphertexts (decapsulation returns a pseudorandom value instead of an error). Compression keeps sizes reasonable.

Security levels:

  • ML-KEM-512 → ~AES-128.
  • ML-KEM-768 → ~AES-192 (most common for TLS hybrids).
  • ML-KEM-1024 → ~AES-256.

Public keys ~1-1.5 KB, ciphertexts similar—larger than ECDH but practical.


Let's break down each term one by one, like you're completely new to this. We'll use everyday analogies—no heavy math.

CRYSTALS

  • This is just a project name, not a thing by itself.
  • It stands for Cryptographic Suite for Algebraic Lattices.
  • Think of it like a "brand" or "team name" from a group of researchers who built two quantum-safe tools:
    • Kyber (for key exchange)
    • Dilithium (for digital signatures)
  • So when people say "CRYSTALS-Kyber", they're talking about the Kyber algorithm made by the CRYSTALS team.

Kyber

  • The original name of the algorithm.
  • It's a way for two computers (like a browser and a server) to agree on a secret key over the internet, even if someone is listening.
  • Analogy: Imagine you and a friend want to share a secret password, but you're mailing it through a public post office where spies can read everything. Kyber is like putting the password in a magic box that only your friend can open—even if the spy sees the box.

KEM (Key Encapsulation Mechanism)

  • This is the type of tool Kyber is.
  • Simple definition: A KEM is a secure way to "wrap up" (encapsulate) a random secret key and send it to someone.
  • The sender creates a small package (ciphertext) that contains the secret.
  • Only the person with the private key can unwrap it and get the same secret.
  • Everyone else sees garbage.
  • It's used for key exchange in things like TLS (the "https" lock in your browser).

ML (Module-Lattice)

  • This is the math foundation that makes Kyber secure.
  • "Lattice" = imagine a giant grid of points in many dimensions (like a 3D chessboard but with 512 dimensions). Finding the shortest path on this grid is super hard—even for quantum computers.
  • "Module" = a structured way to build these grids so the math is efficient and fast on normal computers.
  • You don't need to understand the math deeply—it's just why Kyber is believed to be quantum-safe.

Putting It All Together: Kyber → ML-KEM

  • Original research name: CRYSTALS-Kyber
  • NIST (the US standards body) picked it as the winner for quantum-safe key exchange.
  • They renamed the standardized version to ML-KEM (Module-Lattice-based Key Encapsulation Mechanism) and published it as an official standard (FIPS 203) in 2024.
  • So today:
    • Kyber = the original friendly name everyone still uses.
    • ML-KEM = the official government-standard name.
    • They are basically the same thing with tiny safety tweaks.

Why This Matters for Your Job (Certificates & Identity Systems)

Certificates are delivered and validated over TLS connections. Right now TLS mostly uses old algorithms (like ECDH) that a big quantum computer could break in the future. Switching the key exchange part to ML-KEM (Kyber) protects credential issuance, enrollment, and revocation against future quantum attacks—especially important for long-lived certificates or sensitive identity systems.

Practical Part – See It in Action (Super Easy)

  1. Real-World Example You Can Try Today
    Go to a test site that supports hybrid post-quantum TLS: https://pq.cloudflareresearch.com (Cloudflare's PQ test page)
    • Open it in Chrome or Firefox (recent versions).
    • Open Developer Tools → Security tab.
    • You might see "X25519MLKEM768" or similar in the key exchange—this is classical + ML-KEM combined (hybrid mode). This shows a real certificate being delivered over a quantum-safe connection.
  2. Tiny Code Snippet (Python with liboqs-python)
    If you have Python and want to play locally, install with pip install liboqs-python, then run:

   from oqs import KeyEncapsulation

   # Create ML-KEM-768 (most common level)
   server = KeyEncapsulation("ML-KEM-768")

   # Server generates keys
   public_key = server.generate_keypair()

   # Client encapsulates a fresh secret against the server's public key
   client = KeyEncapsulation("ML-KEM-768")
   ciphertext, shared_secret_client = client.encap_secret(public_key)

   # Server decapsulates and recovers the same secret
   shared_secret_server = server.decap_secret(ciphertext)

   print("Secrets match?", shared_secret_client == shared_secret_server)  # True

    Run this script → it prints "True". You've just done a real post-quantum key exchange in ~15 lines. (Note: liboqs-python names the operations encap_secret/decap_secret rather than encapsulate/decapsulate.)

This is the same mechanism that will protect future certificate transport. Next time you can ask about the signature side (ML-DSA / Dilithium) or how hybrids work in real TLS handshakes!

cyber-posts

AES

Core Idea of AES

AES (Advanced Encryption Standard) is a symmetric block cipher — meaning the same key is used for both encryption and decryption. It is the most widely used encryption algorithm today and is the official U.S. government standard (FIPS 197) for protecting sensitive but unclassified data.

Key points:

  • It operates on fixed-size 128-bit blocks of data (16 bytes).
  • It supports three key sizes: 128, 192, or 256 bits.
  • AES-128 uses a 128-bit key, AES-256 uses a 256-bit key.
  • The algorithm is the same for all key sizes; the only differences are the key length and the number of rounds (iterations) it performs.

Why the different key sizes?

  • Larger keys provide higher security against brute-force attacks.
  • AES-128 is considered secure for most applications today (no practical attacks exist).
  • AES-256 is used when regulations or policies require "higher assurance" (e.g., some government or financial systems, or when protecting data for decades into the future).

How AES Works (High-Level)

AES encrypts a 128-bit block through a series of rounds. Each round applies four transformations:

  1. SubBytes – Non-linear substitution: each byte is replaced using a fixed lookup table (S-box). This provides confusion.
  2. ShiftRows – Bytes in each row of the 4×4 state matrix are cyclically shifted left. This spreads data across the block (diffusion).
  3. MixColumns – Each column is multiplied by a fixed matrix over GF(2⁸). Further diffusion.
  4. AddRoundKey – The round key (derived from the main key) is XORed with the state.

The process:

  • An initial AddRoundKey with the original key.
  • Then Nr-1 full rounds (all four steps).
  • A final round with no MixColumns.

Number of rounds (Nr):

  • AES-128 → 10 rounds
  • AES-192 → 12 rounds
  • AES-256 → 14 rounds

Key expansion: The original key is expanded into a set of round keys (one per round) using a key schedule algorithm.
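To make the "one fixed-size block" point concrete, here is a minimal sketch using the Python cryptography package (assumed installed via pip install cryptography): raw AES applied to exactly one 16-byte block. The all-zero key and the plaintext are demo values, and ECB mode is used purely to expose the block-cipher core; never use ECB for real data.

```python
# Raw AES on exactly one 16-byte block (ECB, for illustration only;
# never use ECB mode for real data). Key and plaintext are demo values.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(16)                  # 128-bit key -> AES-128 (10 rounds internally)
block = b"exactly16bytes!!"      # one 128-bit plaintext block
assert len(block) == 16

enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = enc.update(block) + enc.finalize()
print("ciphertext length:", len(ct))   # the output is also exactly one block

dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
assert dec.update(ct) + dec.finalize() == block
```

With a 32-byte key the same code runs AES-256 (14 rounds); the block size stays 128 bits either way.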

Modes of operation (important in practice):
AES by itself only encrypts one 128-bit block. To encrypt larger messages securely, we use modes like:

  • CBC (legacy, needs padding and IV)
  • CTR (turns AES into a stream cipher)
  • GCM (Galois/Counter Mode) – most common today: provides both confidentiality and authenticated encryption (integrity + authenticity).

Security Summary

  • No practical break of AES exists (as of 2026).
  • Best known attacks are theoretical and reduce the effective security only marginally.
  • AES-128 offers ~128 bits of security.
  • AES-256 offers ~256 bits (overkill for most threats except quantum computers in the distant future).

In your domain (identity/credentials/certificates):

  • AES is frequently used to encrypt private keys in keystores, protect credential databases, or encrypt data at rest.
  • In protocols like TLS, AES-GCM is the dominant cipher suite.
  • Many credential formats (e.g., encrypted JWTs, password vaults, secure enclaves) rely on AES.

Practical Part

Here’s a short, real-world Python example using the widely available cryptography library (the de-facto standard in Python for secure crypto). It shows AES-128-GCM and AES-256-GCM encryption/decryption of a simple message.

from os import urandom
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_aes(key: bytes, plaintext: bytes) -> bytes:
    # Generate a random 96-bit (12-byte) nonce (IV) - required for GCM
    nonce = urandom(12)

    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data=None)

    # Return nonce + ciphertext (common format)
    return nonce + ciphertext

def decrypt_aes(key: bytes, data: bytes) -> bytes:
    nonce = data[:12]
    ciphertext = data[12:]

    aesgcm = AESGCM(key)
    plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data=None)
    return plaintext

# Example usage
message = b"Secret credential data"

# AES-128-GCM (16-byte key)
key_128 = AESGCM.generate_key(bit_length=128)
ct_128 = encrypt_aes(key_128, message)
print("AES-128 ciphertext length:", len(ct_128))

# AES-256-GCM (32-byte key)
key_256 = AESGCM.generate_key(bit_length=256)
ct_256 = encrypt_aes(key_256, message)
print("AES-256 ciphertext length:", len(ct_256))

# Decryption requires the exact same key used for encryption
# (in real code you would securely store/load the key)
assert decrypt_aes(key_128, ct_128) == message
assert decrypt_aes(key_256, ct_256) == message

Key observations:

  • Ciphertext length = nonce (12 bytes) + plaintext length + 16-byte authentication tag.
  • Changing just one bit in the key or nonce produces completely different ciphertext.
  • Never reuse nonce with the same key (GCM requirement).

Try running this snippet locally (install with pip install cryptography). It’s only ~30 lines and gives you immediate hands-on feel for the difference between AES-128 and AES-256 in practice.



Bouncy Castle crypto

What is Bouncy Castle?

Bouncy Castle is an open-source cryptography library for Java (and C#). It provides a clean, well-maintained implementation of a very wide range of cryptographic algorithms and standards. The project is run by the Legion of the Bouncy Castle, a non-profit group in Australia, and has been actively developed since 2000.

The two main APIs are:

  • Lightweight API – direct, low-level access to algorithms (similar to how you’d use them in C).
  • JCA/JCE Provider API – implements the standard Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE), so you can plug it in as an additional security provider alongside the built-in Sun/Oracle providers.

Why use Bouncy Castle in certificate/identity/credential systems?

In your domain (handling X.509 certificates, digital signatures, PKCS objects, credential issuance/validation), Bouncy Castle is extremely common for several practical reasons:

  1. Broader algorithm support
    Java’s default providers support only a subset of algorithms (and some are restricted by old export rules). Bouncy Castle adds many modern and legacy algorithms: ECDSA with named curves, EdDSA (Ed25519/Ed448), RSA-PSS, CMAC, GCM with large nonces, newer hash functions, etc.
  2. Excellent ASN.1 and certificate handling
    X.509 certificates, CRLs, OCSP responses, PKCS#10 requests, PKCS#12 keystores, CMS signed data, etc., are all complex ASN.1 structures. Bouncy Castle has battle-tested parsers/generators that are more robust and feature-complete than the built-in java.security.cert classes.
  3. Provider model integration
    You can register Bouncy Castle as a JCA provider with one line of code:
   Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());

After that, standard Java APIs (KeyStore, CertificateFactory, Signature, Cipher, etc.) automatically gain access to all Bouncy Castle algorithms without changing your existing code.

  4. FIPS-compliant variants
    There are separate Bouncy Castle FIPS jars that are certified for use in regulated environments (government, finance, healthcare).
  5. Used by almost every major Java library
    Libraries like Apache PDFBox, iText, and Spring Security depend on it, so Bouncy Castle is often already on the classpath indirectly. Using it directly avoids reinventing the wheel.
  6. Lightweight and performant
    Compared to alternatives like OpenSSL bindings (JNI overhead) or commercial libraries, Bouncy Castle is pure Java, easy to bundle, and has good performance for most use cases.

When you might not need it: If you only use very basic algorithms (RSA 2048 + SHA-256 + PKCS#12) and stay within the default Java provider limits, you can avoid the extra dependency. But as soon as you touch modern elliptic curves, post-quantum prep, or complex certificate extensions, Bouncy Castle becomes the de-facto choice.

Practical part – simple code example

Here’s a minimal, self-contained example that shows:

  • Adding Bouncy Castle as a provider
  • Generating a self-signed X.509 certificate with ECDSA (something the default provider can’t do easily on older JDKs)

import java.math.BigInteger;
import java.security.*;
import java.util.Date;
import javax.security.auth.x500.X500Principal;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

public class BouncyCastleExample {
    public static void main(String[] args) throws Exception {
        // 1. Register Bouncy Castle provider (do this once at app startup)
        Security.addProvider(new BouncyCastleProvider());

        // 2. Generate an EC key pair (curve prime256v1 = NIST P-256)
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC", "BC");
        kpg.initialize(256); // or use a named curve: new ECGenParameterSpec("prime256v1")
        KeyPair kp = kpg.generateKeyPair();

        // 3. Build a self-signed certificate (valid for 1 year)
        // (the legacy X509V3CertificateGenerator was removed from modern
        // releases; the bcpkix builder API below is the current way)
        X500Principal dn = new X500Principal("CN=Test Self-Signed Cert");
        Date notBefore = new Date();
        Date notAfter = new Date(System.currentTimeMillis() + 365L * 24 * 60 * 60 * 1000);

        JcaX509v3CertificateBuilder certGen = new JcaX509v3CertificateBuilder(
                dn,                                             // issuer
                BigInteger.valueOf(System.currentTimeMillis()), // serial number
                notBefore,
                notAfter,
                dn,                                             // subject (self-signed)
                kp.getPublic());

        ContentSigner signer = new JcaContentSignerBuilder("SHA256withECDSA")
                .setProvider("BC")
                .build(kp.getPrivate());

        X509CertificateHolder holder = certGen.build(signer);
        java.security.cert.X509Certificate cert =
                new JcaX509CertificateConverter().setProvider("BC").getCertificate(holder);

        System.out.println("Generated certificate:");
        System.out.println(cert);
    }
}

Dependencies (Maven):

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk18on</artifactId> <!-- or newer jdk version -->
    <version>1.78.1</version> <!-- check latest at https://www.bouncycastle.org/latest_releases.html -->
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk18on</artifactId> <!-- certificate builder and CMS classes -->
    <version>1.78.1</version>
</dependency>

Run this snippet in any Java project and you’ll see a printed X.509 certificate that uses ECDSA – something you’d need in modern credential systems (e.g., for shorter signatures or post-quantum readiness).

This directly connects to your work: whenever you need to generate, parse, validate, or sign certificates programmatically in Java, Bouncy Castle is usually the safest and most capable choice.


logical channels 7816

Logical Channels in ISO/IEC 7816-4

ISO/IEC 7816-4 defines logical channels as a mechanism to run multiple independent sessions on a single physical smart card connection. Each channel maintains its own state:

  • Currently selected application (DF – Dedicated File)
  • Current file (EF – Elementary File)
  • Security status (e.g., PIN verified, keys in use)

This is essential for multi-application cards (common in eID, health cards, or credential systems) where you might need to access identity data and a signing certificate at the same time without deselecting one to use the other.

Key facts

  • Channel 0 → basic channel, always open at card reset (ATR), cannot be closed.
  • Additional channels → typically 1–3 (total 4 channels); some modern/JavaCard implementations support up to 19.
  • Channels are independent → no interleaving of command-response pairs across channels (you finish one command before the next, but you can switch channels between commands).
  • Support level → indicated by the card (e.g., in ATR historical bytes or card capabilities data objects).

How the channel number is encoded (most common case – channels 0–3)
The channel number is in the CLA byte of every APDU, in bits b2 and b1:

Channel | b2 b1 | CLA example (standard interindustry, no SM/chaining)
   0    |  0 0  | 0x00
   1    |  0 1  | 0x01
   2    |  1 0  | 0x02
   3    |  1 1  | 0x03
To switch channel in code: cla = (base_cla & 0xFC) | channel_number (clears low bits then ORs the number).

Opening and closing channels
Two ways:

  1. Implicit (most common): Send any command (usually SELECT) with CLA indicating a closed channel → the card opens it automatically if supported.
  2. Explicit: Use the MANAGE CHANNEL command (INS = 0x70).

MANAGE CHANNEL details

  • Open (P1 = 0x00):
  • P2 = 0x00 → card assigns an available channel and returns it.
  • P2 = 0x01–0x03 → request a specific channel.
  • Le = 0x01 if P2=0x00 (to get the assigned number), absent otherwise.
  • Close (P1 = 0x80):
  • P2 = channel number to close (bits b2 b1).

Practical examples (APDUs in hex)

  1. Explicit open, card assigns (sent on channel 0):
   Command:  00 70 00 00 01
   Response: 02 90 00   ← card assigned channel 2 (data byte = 02)
  2. Explicit open, request channel 1:
   Command:  00 70 00 01
   Response: 90 00        ← success, no data needed
  3. Use the channel – SELECT an application on channel 1:
   Command:  01 A4 04 00 0C A0 00 00 00 63 50 4B 43 53 2D 31 35   ← example PKCS#15 AID (Lc = 0C for the 12-byte AID)
  4. Close channel 1 (sent on channel 0):
   Command:  00 70 80 01
   Response: 90 00
  5. Implicit open example – directly open channel 2 and select:
   Command:  02 A4 04 00 ... (AID)

Quick code snippet (Python-like with pcsc/pyscard style)

def send_on_channel(conn, base_cla, ins, p1, p2, data=b'', le=None, channel=0):
    cla = (base_cla & 0xFC) | channel      # set channel in low bits b2 b1
    apdu = [cla, ins, p1, p2]
    if data:
        apdu += [len(data)] + list(data)   # Lc field + command data
    if le is not None:
        apdu.append(le)                    # Le field
    response, sw1, sw2 = conn.transmit(apdu)
    return response, sw1, sw2

# Example: open channel 1 implicitly by selecting on it
send_on_channel(conn, 0x00, 0xA4, 0x04, 0x00, data=aid, channel=1)

This gives you concurrent access – perfect for credential systems where one channel can stay authenticated to an identity app while another performs operations in a different app. Tomorrow you can build on this (e.g., how security status can be shared or isolated across channels).


CMAC

CMAC (Cipher-based Message Authentication Code) is a symmetric-key cryptographic algorithm that produces a fixed-size authentication tag (usually 8–16 bytes) for a message. It proves two things:

  • Integrity: the message was not tampered with.
  • Authenticity: the message came from someone who knows the shared secret key.

It is defined in NIST SP 800-38B (Recommendation for Block Cipher Modes of Operation: the CMAC Mode for Authentication) and is based on a block cipher like AES (AES-CMAC is the most common variant).

Key properties compared to similar algorithms:

Algorithm | Key Type  | Base Primitive               | Variable-length safe? | Common Use Cases
CMAC      | Symmetric | Block cipher (e.g. AES)      | Yes (built-in)        | Protocols needing a block-cipher-based MAC (EMV payments, some IoT, secure elements)
HMAC      | Symmetric | Hash function (e.g. SHA-256) | Yes                   | Most internet protocols (JWT HS256, TLS, OAuth)
CBC-MAC   | Symmetric | Block cipher                 | Only for fixed-length | Legacy systems (CMAC fixes its flaws)

CMAC is especially useful when you already have a block cipher (AES) in your system (e.g. for encryption) and want a MAC without introducing a separate hash function.

How CMAC Works (High-Level)

  1. Key: A symmetric block-cipher key (e.g. 128-bit AES key).
  2. Subkey generation:
    • Encrypt an all-zero block with the cipher → L.
    • Derive two subkeys K1 and K2 from L by left-shifting and conditional XOR with a fixed constant Rb (doubling in GF(2¹²⁸)). This is done once per key.
  3. Message processing:
    • Split the message into blocks.
    • Run standard CBC-MAC (chain encryption with XOR).
    • For the last block:
      • If the message length is a multiple of the block size → XOR K1.
      • Otherwise → pad to full block, then XOR K2.
    • The final ciphertext block is the MAC tag (often truncated).

This subkey trick makes CMAC secure for arbitrary-length messages (unlike plain CBC-MAC, which is only safe for fixed-length).
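All of the above (subkey derivation, padding, the final XOR) is handled internally by most crypto libraries. A minimal AES-CMAC sketch with the Python cryptography package; the key is the well-known RFC 4493 test key, and the message is an illustrative placeholder:

```python
# AES-CMAC tag generation and verification with the `cryptography` package.
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # RFC 4493 test key

# Tag generation (subkeys K1/K2 and padding are applied internally)
c = CMAC(algorithms.AES(key))
c.update(b"credential record to authenticate")
tag = c.finalize()        # 16-byte tag; protocols often truncate it
print("CMAC tag:", tag.hex())

# Verification: recompute over the same data (raises InvalidSignature on mismatch)
v = CMAC(algorithms.AES(key))
v.update(b"credential record to authenticate")
v.verify(tag)
print("tag verified")
```

Because CMAC is deterministic, the verifier simply recomputes the tag with the shared key and compares in constant time.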

Relevance to Identity & Credential Systems

In your domain (certificates, identity, credentials):

  • Digital certificates (X.509) usually use asymmetric signatures (RSA/ECDSA) for public verifiability.
  • Symmetric MACs like CMAC appear when:
    • A shared secret exists (e.g. device-to-server mutual authentication).
    • Hardware security modules (HSMs) or secure elements use AES-CMAC.
    • Payment/contactless cards (EMV) use AES-CMAC for transaction authentication.
    • Some IoT credential bootstrapping or attested credentials in constrained environments.
    • Protocols that already use AES encryption and want a matching MAC without pulling in SHA-2.

If your system ever handles symmetric-key protected credentials or needs to authenticate data inside a secure channel, CMAC is a standard choice.

(c) from crypto stack exchange


the P71

P71 Card/Chip Overview

The "P71" refers to NXP Semiconductors' SmartMX3 P71 series (e.g., P71D321, P71D320), a family of secure microcontrollers designed as the core chip in modern smart cards. It's one of the most widely used platforms for high-security applications, especially in identity and credential systems like yours.

In your field of certificates, identity, and credentials, the P71 is highly relevant: it's the hardware foundation for many national eID cards, ePassports, driver's licenses, health cards, and PIV-style enterprise credentials. It securely stores private keys, X.509 certificates, biometric data, and executes cryptographic operations for authentication, signing, and access control—all while resisting physical and logical attacks.

How It Works

  • Hardware Architecture
    The chip features a secure RISC CPU with dedicated crypto coprocessors (Fame3 for RSA/ECC, AES/DES engines, PUF for unique device keys, TRNG for randomness). It includes tamper-resistant sensors (light, voltage, glitch detection) and IntegralSecurity 3.0 countermeasures against side-channel and fault attacks. Memory options reach up to 500 KB non-volatile (Flash/EEPROM) for code/data, plus RAM. Dual-interface support covers contact (ISO 7816) and contactless (ISO 14443 Type A, up to 848 kbit/s).
  • Software/OS Layer
    It typically runs JCOP4 (NXP's Java Card OpenPlatform implementation): Java Card 3.0.5 Classic + GlobalPlatform 2.3. This allows multiple independent applets (e.g., one for eID authentication, one for qualified electronic signature, one for payment/EMV). Applets are post-issuance loadable and deletable in secure ways.
  • Key Security Features
    Certifications include Common Criteria EAL6+ (highest for smart card OS), EMVCo, FIPS 140-3 on some configs. It supports protocols like PACE-CAM (for contactless privacy), EAC (Extended Access Control for biometrics), BAC/SAC for ePassports.
  • Why It's Common in Identity Systems
    Governments choose it because it balances performance (<2s for ePassport SAC), large memory for multiple certs/data groups, and proven resistance to attacks. Over 7 billion SmartMX chips have shipped globally.


Practical Examples

Real-world cards using P71/JCOP4:

  • Many modern eID cards (e.g., Slovenian eID uses P71 with JCOP4).
  • Some fuel/loyalty cards in Africa (dual-purpose with payment + ID).
  • Enterprise PIV cards (US gov-compatible) and FIDO2 security keys.


Quick Practical Check: Identifying a P71 Card

Most JCOP4 P71 cards return a distinctive ATR (Answer To Reset). Example from a real Slovenian eID:

3B D5 18 FF 81 91 FE 1F C3 80 73 C8 21 10 0A

You can detect it in software like this (short Python snippet using pyscard – great for testing in credential systems):

from smartcard.System import readers
from smartcard.util import toHexString

r = readers()
if not r:
    print("No reader found")
else:
    reader = r[0]
    connection = reader.createConnection()
    connection.connect()
    atr = toHexString(connection.getATR())
    print("ATR:", atr)
    # Typical P71/JCOP pattern starts with 3B ...

If you see an ATR starting with 3B … FE 45 … or similar with "JCOP" in historical bytes, it's very likely a P71-based card.

Next Step Connection
When working with certificates on these chips, you'll often deal with EF.CV certificates in the eID applet (e.g., authentication cert, signing cert, QES cert). The secure storage and on-card key generation make them ideal for verifiable credentials—private keys never leave the chip.

Let me know the next term or how this ties into your current project!


TLS

SI (session identifier) in an authentication scheme: symmetric, asymmetric, 1-step AKE, 2-step AKE, whatever. What is the SI?

What is SI (Session Identifier)?

In authentication and key exchange protocols (AKE = Authenticated Key Exchange), the Session Identifier (SI or commonly sid) is a value that uniquely labels a specific run of the protocol between two parties (e.g., client and server, or user and identity provider).

Its main roles are:

  • Partnering: In security proofs, it defines which two protocol executions “belong together” — they must output the same SI and the same session key to be considered correctly partnered.
  • Session management: Allows the server (or identity provider) to track or resume a session without repeating the full authentication.
  • Resumption/abbreviation: In practical systems like TLS or OpenID Connect, the SI lets a client later resume the same session (reusing cryptographic material) with a shorter handshake.
  • Logout/revocation: In identity systems, the SI can identify exactly which user session to terminate (e.g., single sign-out).

How SI is constructed – differences across schemes

  • Symmetric AKE (pre-shared key, PSK-based):
    • SI is often derived from nonces exchanged by both parties or from data inside an encrypted/authenticated message.
    • Example: In TLS-PSK or Kerberos, the session/ticket contains an identifier that both sides can compute or extract from the shared secret.
  • Asymmetric AKE (public keys, certificates):
    • SI can be explicitly chosen by one party (usually the server) and sent in clear, or derived from the transcript (all messages exchanged).
    • Classic example: TLS 1.2 – the server picks an opaque Session ID (0–32 bytes) and sends it in ServerHello.
  • 1-step (one-round) AKE:
    • Only one message is sent (often unilateral authentication).
    • SI is usually based on that single message + static identities, or a server-generated value returned in the response.
  • 2-step (two-round) AKE:
    • Both parties send messages and contribute fresh randomness.
    • SI is typically the concatenation of the two main flows (or a hash of them), ensuring both parties compute the exact same value. This gives strong partnering in security models.

In modern security models (eCK, Game-based models), the sid is formally defined as something both honest parties can compute identically from the transcript, so an adversary cannot force mismatched sessions.
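The transcript-based construction can be sketched in a few lines. This is a toy illustration, not any specific protocol: the message names are made up, and real schemes hash a precisely defined transcript encoding.

```python
# sid = hash of the two main flows; both honest parties compute it
# identically from the transcript they observed. Messages are illustrative.
import hashlib

def session_id(client_msg: bytes, server_msg: bytes) -> bytes:
    return hashlib.sha256(client_msg + server_msg).digest()

sid_client = session_id(b"ClientHello||nonce_C", b"ServerHello||nonce_S")
sid_server = session_id(b"ClientHello||nonce_C", b"ServerHello||nonce_S")
assert sid_client == sid_server   # matching transcripts -> same sid
print("sid:", sid_client.hex())
```

If an adversary tampers with either flow, the two parties hash different transcripts and end up with mismatched sids, which is exactly what the partnering definition is designed to detect.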

Connection to your work (certificates, identity, credentials)

In systems that issue or present digital credentials:

  • The authentication phase that protects the issuance or presentation often uses an AKE under the hood (usually TLS + OpenID Connect or mutual TLS).
  • The SI (or equivalent session handle) is what lets the identity provider correlate the authenticated user with the credential issuance request, and later invalidate only that specific session if needed.

Practical part – real examples

  1. TLS 1.2 Session ID (classic explicit SI) In a Wireshark capture of a TLS 1.2 handshake, look at the Server Hello message:
    • Field: “Session ID Length” followed by the Session ID bytes (e.g., 32 random bytes chosen by the server). If the client wants to resume, it sends the same Session ID in its ClientHello, and the server can skip certificate exchange and key derivation.
  2. OpenID Connect – the “sid” claim (very common in identity systems)
    An ID Token issued by an OpenID Provider may contain:

   {
     "iss": "https://idp.example.com",
     "sub": "user-123",
     "aud": "client-abc",
     "exp": 1735857600,
     "iat": 1735854000,
     "sid": "08a5f3c8-7d4e-4f2a-9b0e-1234567890ab"
   }

    The sid value is the session identifier at the OP. It is included in Logout Tokens so relying parties can tell the provider exactly which user session to terminate during back-channel logout.
  3. Quick code snippet – extracting the Session ID from a TLS handshake (Python + ssl)

   import socket
   import ssl

   hostname = 'www.example.com'
   context = ssl.create_default_context()  # system trust store; or load your own CA file

   with socket.create_connection((hostname, 443)) as sock:
       with context.wrap_socket(sock, server_hostname=hostname) as ssock:
           # After the handshake, read the session ID (TLS 1.2 style)
           session = ssock.session
           session_id = session.id if session is not None else None
           print("Session ID (hex):",
                 session_id.hex() if session_id else "None (TLS 1.3 or no resumption)")

    Run this against a server that still supports TLS 1.2 session IDs and you’ll see the server-chosen SI printed.

Takeaway for today: Whenever you see “SI” or “sid” in protocol specs or security proofs related to authentication, it almost always means Session Identifier — the glue that ties two protocol runs together and enables resumption/management features critical to real-world identity systems.

TLS-PSK

What is TLS-PSK?

TLS-PSK (Pre-Shared Key) is a family of TLS authentication and key exchange modes that use a symmetric key shared in advance between client and server instead of (or in addition to) public-key certificates. It is a symmetric authenticated key exchange (AKE) because both parties prove possession of the same secret key, achieving mutual authentication without asymmetric cryptography in its pure form.

Key use cases in your domain (identity & credentials):

  • Constrained devices (IoT, embedded) where certificate validation is too heavy.
  • Closed ecosystems where keys can be provisioned out-of-band (e.g., device manufacturing).
  • Session resumption (very common) – a PSK derived from a previous full handshake to make subsequent connections faster and lighter.
  • Some credential issuance/presentation protocols use PSK-based channels for efficiency.

How TLS-PSK works – main variants

  1. TLS 1.2 PSK (legacy, “pure” external PSK)
    • Separate PSK cipher suites (e.g., TLS_PSK_WITH_AES_128_GCM_SHA256).
    • Client sends ClientKeyExchange containing a PSK identity (an opaque or human-readable label; the server may first send a psk_identity_hint in its ServerKeyExchange).
    • Both sides compute the master secret as:

      master_secret = PRF(pre_master_secret, "master secret", client_random + server_random)

      where pre_master_secret is built directly from the PSK with no Diffie-Hellman (per RFC 4279 it is two length-prefixed fields: a block of zero bytes and the PSK itself).
    • Optional: PSK can be combined with DHE/RSA for forward secrecy.
    • Symmetric authentication: the ability to derive correct finished messages proves knowledge of the PSK.
  2. TLS 1.3 PSK (modern, integrated)
    • No separate cipher suites – PSK is a key exchange mode negotiated via the psk_key_exchange_modes extension (psk_ke / psk_dhe_ke).
    • Three sub-modes:
      • PSK-only (external): Pure symmetric, no forward secrecy. Rare in practice.
      • PSK-DHE: PSK for authentication + ephemeral Diffie-Hellman for forward secrecy (recommended).
      • PSK-only (resumption): Most common real-world use – the PSK is derived from a previous full (certificate-based) handshake.
    • Handshake flow (simplified for resumption PSK-DHE):
      1. ClientHello: offers a pre_shared_key extension listing one or more PSK identities (for resumption, the opaque session ticket) plus a key_share (ephemeral DH).
      2. Server indicates the selected PSK in its ServerHello pre_shared_key extension and sends its own key_share; after the handshake it can issue NewSessionTicket messages for future resumptions.
      3. Both derive session keys from the PSK + DH shared secret + transcript.
    • Binders: Client includes a cryptographic binder (HMAC over transcript using the offered PSK) to prevent downgrade attacks.
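
The binder mentioned above can be sketched in plain Python. This is a simplified rendering of the RFC 8446 key schedule (SHA-256 only, single-block HKDF-Expand), meant to show the idea rather than serve as a TLS implementation:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
    # HkdfLabel structure from RFC 8446, Section 7.1
    full_label = b"tls13 " + label
    hkdf_label = (length.to_bytes(2, "big")
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    # Single-block HKDF-Expand (sufficient for SHA-256-sized outputs)
    return hmac.new(secret, hkdf_label + b"\x01", hashlib.sha256).digest()[:length]

def psk_binder(psk: bytes, truncated_client_hello: bytes, external: bool = False) -> bytes:
    """HMAC over the truncated ClientHello, keyed from the offered PSK."""
    early_secret = hkdf_extract(b"\x00" * 32, psk)
    label = b"ext binder" if external else b"res binder"
    empty_hash = hashlib.sha256(b"").digest()
    binder_key = hkdf_expand_label(early_secret, label, empty_hash, 32)
    finished_key = hkdf_expand_label(binder_key, b"finished", b"", 32)
    transcript_hash = hashlib.sha256(truncated_client_hello).digest()
    return hmac.new(finished_key, transcript_hash, hashlib.sha256).digest()
```

Resumption PSKs use the "res binder" label, external PSKs "ext binder". The server recomputes the same HMAC to verify the client actually holds the PSK it offered.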

Symmetric vs Asymmetric in TLS context

Aspect | Certificate-based (asymmetric) | PSK-based (symmetric)
Authentication | Public keys + certificates (PKI) | Shared secret key
Key distribution | Trust anchors, revocation checks | Out-of-band provisioning
Computational cost | Higher (signature verification) | Lower (symmetric ops only)
Forward secrecy | Possible with DHE/ECDHE | Only if combined with (EC)DHE
Typical use | Public web, general identity | IoT, resumption, closed systems
Session resumption | Session ID (TLS 1.2) or tickets | PSK tickets (preferred in TLS 1.3)

Connection to Session Identifier (SI) from yesterday

In TLS-PSK resumption:

  • The PSK identity in the pre_shared_key extension acts as the session identifier.
  • For resumption tickets (most common), the identity is the opaque ticket itself (sent in clear), which the server uses to look up the encrypted PSK state.
  • This is exactly the “SI” concept: a value that uniquely identifies the session and allows the server to resume it without repeating full authentication.

Practical part – real examples you can try

  1. List PSK-related cipher suites supported by your local OpenSSL (most Linux/macOS machines have it):

     openssl ciphers -v | grep -i PSK

     You’ll see legacy TLS 1.2 suites like:

     TLS_PSK_WITH_AES_128_CBC_SHA  TLSv1.2  Kx=PSK  Au=PSK  Enc=AES(128)  Mac=SHA1

     In TLS 1.3, PSK is not listed as separate ciphers – it’s a mode.
  2. Observe TLS 1.3 connection/session reuse with curl

     Each curl process keeps its own session cache, so two separate invocations start from scratch. Request the URL twice in one invocation so the second request can reuse the first connection or session:

     curl -v --tlsv1.3 --tls-max 1.3 https://example.com https://example.com

     In the verbose output, look for lines like “Re-using existing connection” or a second, abbreviated TLS handshake.
  3. Wireshark view of PSK identity (resumption ticket) Capture a TLS 1.3 resumption to a site like cloudflare.com:
    • Filter: tls.handshake.type == 1 (ClientHello)
    • Expand Pre-Shared Key Extension → identities → you’ll see the opaque ticket bytes (the SI) sent in clear.

Takeaway for today: TLS-PSK is the practical symmetric AKE inside TLS, especially dominant for session resumption. When you design credential systems that run over TLS, resumption PSKs are what make repeated authentications cheap and fast, and the PSK identity/ticket is the concrete “SI” that ties sessions together.

TLS diagram

Visualizing the TLS Handshake

The TLS handshake is the core Authenticated Key Exchange (AKE) we’ve been discussing: it establishes authenticated, confidential channels, derives session keys, and enables features like session identifiers (SI) and PSK resumption.

Diagrams make the message flows (and differences between versions) much clearer than text alone.

Key points connecting to what we covered:

  • TLS 1.2 (legacy but still common): Multi-round (usually 2-RTT), explicit Session ID field in ServerHello for resumption (that’s the classic SI you asked about first). Resumption can abbreviate the handshake.
  • TLS 1.3 (current standard): Streamlined to 1-RTT for full handshakes, encryption starts earlier, no explicit Session ID field — instead uses opaque session tickets (sent via NewSessionTicket post-handshake) that contain an encrypted PSK for resumption. This is the modern symmetric resumption path we discussed in TLS-PSK.
  • PSK in TLS 1.3: For resumption, the client sends a pre_shared_key extension in ClientHello containing the ticket (the SI) from a prior connection. Server looks it up, both derive keys — often combined with ephemeral DH for forward secrecy. This makes repeat connections very fast (1-RTT or even 0-RTT with risks).

Here are clear, high-quality diagrams showing the flows:

  • Understanding HTTPS principles, SSL/TLS protocols in detail (sobyte.net): Side-by-side comparison of TLS 1.2 vs TLS 1.3 full handshakes, with real-world latency impact (notice how TLS 1.3 reduces round trips).
  • TCP and TLS handshake: What happens from typing in a URL to ... (medium.com): Another clean comparison highlighting the simplified message flow in TLS 1.3.
  • What happens in a TLS handshake? | SSL handshake (cloudflare.com): Detailed TLS 1.3 handshake (certificate-based) – note how application data can flow right after the client’s Finished message (1-RTT).
  • How the SSL/TLS Handshake Works: A Modern Guide (linkedin.com): Modern view of TLS 1.3 emphasizing forward secrecy via ephemeral Diffie-Hellman (common in practice).

Practical part – see it yourself

To spot the SI/PSK identity in a real handshake:

  1. Install Wireshark.
  2. Capture traffic to a site like cloudflare.com (supports TLS 1.3 tickets).
  3. First visit → full handshake.
  4. Refresh quickly → look for resumption:
    • Filter: tls.handshake.type == 1 (ClientHello)
    • Expand “Extension: pre_shared_key” → you’ll see the opaque ticket bytes (the session identifier / PSK identity) sent in clear.

Or quick code check (Python, same as before but force resumption):

import ssl
import socket

hostname = 'www.cloudflare.com'
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False  # for demo only
context.verify_mode = ssl.CERT_NONE

def connect(session=None):
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname, session=session) as ssock:
            print("TLS version:", ssock.version())
            print("Cipher:", ssock.cipher())
            print("Session reused:", ssock.session_reused)
            # Send a minimal request so the post-handshake NewSessionTicket
            # (TLS 1.3) is processed before we grab the session
            ssock.send(b"HEAD / HTTP/1.0\r\nHost: " + hostname.encode() + b"\r\n\r\n")
            ssock.recv(4096)
            return ssock.session

# First connection does a full handshake; passing its session into the
# second connection triggers PSK/ticket resumption
session = connect()
connect(session)

If the server supports tickets, the second connection prints Session reused: True.

Takeaway for today: The handshake diagrams show exactly where authentication, key derivation, and session resumption (via SI or PSK ticket) happen — critical for any identity/credential system running over TLS.


cyber-posts

CVC example

Example CVC from BSI TR-03110 (Public/Test Patterns)

BSI TR-03110 defines the CVC format precisely for use in ePassports (EAC) and eID systems, but BSI does not publish raw CVC test files directly on their site (they focus on test CSCA X.509 roots and interoperability guidelines). Instead, public test CVCs are widely available in open-source implementations and interoperability test suites that follow TR-03110 exactly.

These test certificates typically use the fictional country code "UT" (Utopia) or "DETEST" to avoid conflicting with production keys. They form short chains: CVCA → DV → Terminal (IS).

Libraries like EJBCA (with CVC plugin), OpenPACE, pycvc (Python), and JMRTD (Java) generate or include compliant test CVCs for developers.

Typical Parsed Test CVCA Example (Self-Signed Root)

Here is a realistic parsed example of a test CVCA CVC (based on common public test patterns from open-source TR-03110 implementations):

  • Profile Identifier (tag 0x5F29): 00 (initial profile)
  • Certificate Authority Reference (CAR) (tag 0x42): UTCVCA00001 (ASCII, identifies the issuer)
  • Public Key (tag 0x7F49): Nested TLV
  • OID: id-TA-ECDSA-SHA-256 (0.4.0.127.0.7.2.2.2.2.3); the key uses brainpoolP256r1 (curve OID 1.3.36.3.3.2.8.1.1.7)
  • Domain parameters: BrainpoolP256r1 reference
  • Uncompressed point: 04 || X (32 bytes) || Y (32 bytes) – e.g., a test key point like 04 6B17D1F2... (full hex varies by generated key)
  • Certificate Holder Reference (CHR) (tag 0x5F20): UTCVCA00001 (same as CAR for root/self-signed)
  • Effective Date (tag 0x5F25): 100101 (YYMMDD → 2010-01-01)
  • Expiration Date (tag 0x5F24): 251231 (2025-12-31)
  • Signature (tag 0x5F37): ECDSA-SHA-256 signature (r || s, ~64 bytes)

No CHAT (0x7F4C) or extensions (0x65) in a basic CVCA.

The signed body is the raw concatenation of all TLV objects from 0x5F29 to 0x5F24. This keeps parsing simple on cards (no full ASN.1 parser needed).
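
That signed-body concatenation can be sketched directly. The values mirror the parsed example above; the 0x7F49 public key is omitted, and real CVCs encode dates as unpacked BCD digits rather than ASCII, so treat this purely as a structural illustration:

```python
def tlv(tag: int, value: bytes) -> bytes:
    """Encode one BER-TLV object (short-form length, enough for small CVC fields)."""
    tag_bytes = tag.to_bytes(2 if tag > 0xFF else 1, "big")
    assert len(value) < 0x80, "long-form length not handled in this sketch"
    return tag_bytes + bytes([len(value)]) + value

# Hypothetical body for a test CVCA, mirroring the parsed example above
body = (tlv(0x5F29, bytes([0x00]))        # profile identifier
        + tlv(0x42, b"UTCVCA00001")       # CAR
        # ... 0x7F49 public key would go here ...
        + tlv(0x5F20, b"UTCVCA00001")     # CHR
        + tlv(0x5F25, b"100101")          # effective date (ASCII here; real CVCs use BCD digits)
        + tlv(0x5F24, b"251231"))         # expiration date
print(body.hex())
```

Hashing and signing this byte string (ECDSA-SHA-256 for the example above) yields the 0x5F37 signature value.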

Quick Comparison to X.509 (Connecting Dots)

Field/Aspect | X.509 (ASN.1 DER) | CVC (BER-TLV)
Issuer/Subject | Full DN in complex ASN.1 | Short printable strings (CAR/CHR)
Public Key | SubjectPublicKeyInfo with OID | 0x7F49 with nested OIDs and point
Validity | UTCTime fields in TBSCertificate | Simple YYMMDD strings
Authorization | Extensions (EKU, SAN) | Dedicated CHAT (bitmask for roles)
Signed Portion | Only TBSCertificate | All TLVs except signature
Size | Often 500–1000+ bytes | Typically 200–400 bytes (card-optimized)

CVC trades flexibility for size and parsing speed – perfect for offline verification on the chip.

Practical Part: Get and Inspect a Real Test CVC

  1. Install a library to generate/view one (10-minute task):
  • Python: pip install pycvc (implements TR-03110 fully).
  • Or clone https://github.com/frankmorgner/openpace and build cvc-print.
  2. Example with pycvc (run this to create a real test CVCA):
   from pycvc.cvc import CVCertificateBuilder, ECCurve
   from ecdsa import SigningKey, NIST256p  # Or Brainpool

   # Simple test self-signed CVCA
   sk = SigningKey.generate(curve=NIST256p)  # Use Brainpool for full TR-03110 compliance
   vk = sk.verifying_key

   builder = CVCertificateBuilder()
   builder.set_car("UTCVCA00001")
   builder.set_chr("UTCVCA00001")
   builder.set_public_key(vk)
   builder.set_effective_date("100101")
   builder.set_expiration_date("251231")

   cvc = builder.build(sk)  # Signs it
   print(cvc.to_der().hex())  # Outputs full hex of the CVC

This gives you a real DER-encoded CVC in hex. Dump it to file: with open('test_cvca.cvc', 'wb') as f: f.write(cvc.to_der())

  3. Simple TLV parser to inspect any CVC hex (extend for nested public key):
   def parse_simple_tlv(data: bytes):
       pos = 0
       fields = {}
       while pos < len(data):
           # ISO 7816 BER-TLV: if the low 5 bits of the first tag byte are
           # all set, a second tag byte follows (e.g. 0x5F20, 0x7F49)
           tag = data[pos]
           pos += 1
           if tag & 0x1F == 0x1F:
               tag = (tag << 8) | data[pos]
               pos += 1
           len_byte = data[pos]
           pos += 1
           if len_byte & 0x80:  # Long form: low 7 bits = number of length bytes
               num_bytes = len_byte & 0x7F
               length = int.from_bytes(data[pos:pos + num_bytes], 'big')
               pos += num_bytes
           else:
               length = len_byte
           value = data[pos:pos + length]
           pos += length
           if tag == 0x42:
               fields['CAR'] = value.decode('ascii')
           elif tag == 0x5F20:
               fields['CHR'] = value.decode('ascii')
           elif tag == 0x5F25:
               fields['Effective'] = value.decode('ascii')
           elif tag == 0x5F24:
               fields['Expiration'] = value.decode('ascii')
           # Add 0x7F49 handling recursively for pubkey
       return fields

   # Use with your generated hex
   hex_cvc = "your_hex_here_without_spaces"
   print(parse_simple_tlv(bytes.fromhex(hex_cvc)))

Run the builder → get real hex → parse it. You'll see the exact fields match the example above.

This connects directly to signature verification: next time, we can add ECDSA verify on the body bytes using the public point from 0x7F49.

Spend your 10 minutes generating one with pycvc – it's the fastest way to have a real TR-03110-compliant CVC on your machine.

cyber-posts

Signatures Certificates

Digital Signatures in Certificates and Credentials: X.509 vs. CVC

Digital signatures are the core mechanism that makes certificates trustworthy in identity and credential systems. They ensure authenticity (the certificate really comes from the claimed issuer), integrity (the content hasn’t been tampered with), and non-repudiation (the issuer can’t deny having issued it).

How a Digital Signature Works (General Principle)

  1. The issuer creates the certificate data (subject identity, public key, validity dates, authorizations, etc.).
  2. This data (or a specific “to-be-signed” portion) is hashed using a cryptographic hash function (e.g., SHA-256 or SHA-3).
  3. The hash is signed with the issuer’s private key → the result is the signature. (For RSA this is often described as “encrypting the hash”; ECDSA works differently internally but plays the same role.)
  4. The signature is attached to the certificate.
  5. To verify:
  • Compute the hash of the received certificate data.
  • Check the signature against that hash using the issuer’s public key.
  • If the check succeeds → valid signature.

This is the same fundamental process for both X.509 and CVC, but the structure, encoding, and use cases differ.
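
To make the steps above concrete, here is a deliberately toy sketch using textbook RSA (tiny primes, no padding; never use this construction for real signatures). It shows the hash, sign-with-private-key, verify-with-public-key round trip:

```python
import hashlib

# Toy RSA key: tiny primes, no padding – for illustration only
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(message: bytes) -> int:
    # Hash, reduce to fit the toy modulus, then apply the private key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Apply the public key and compare against a freshly computed hash
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"certificate body")
print(verify(b"certificate body", sig))  # True
```

Real certificate signatures add padding (PKCS#1 v1.5 or PSS) and use 2048-bit or larger moduli; ECDSA follows a different equation entirely but fills the same verify-with-public-key role.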

X.509 Certificates

  • Standard: ITU-T X.509 (most widely used public key certificate format).
  • Use cases in your domain: TLS/SSL server certificates, client certificates, code signing, S/MIME email, enterprise PKI for user authentication, many verifiable credential systems (when using traditional PKI).
  • Structure relevant to signatures:
  • Three main parts:
    1. TBSCertificate (To Be Signed Certificate) – contains version, serial number, issuer, subject, public key, validity, extensions, etc.
    2. signatureAlgorithm – identifies the algorithm (e.g., sha256WithRSAEncryption, ecdsa-with-SHA256).
    3. signatureValue – the actual signature bits (the encrypted hash of the TBSCertificate).
  • Encoding: ASN.1 DER (strict binary encoding).
  • Signature process:
  • Hash only the TBSCertificate (not the signature fields).
  • Sign with issuer’s private key.
  • Common algorithms: RSA-PKCS#1 v1.5, RSA-PSS, ECDSA (with NIST or Brainpool curves).
  • Libraries you’ll use: OpenSSL, Bouncy Castle, Java’s java.security.cert, Python’s cryptography or pyOpenSSL.

Card Verifiable Certificates (CVC)

  • Standard: Defined in ISO/IEC 7816-8 and BSI TR-03110 (German Federal Office for Information Security).
  • Use cases in your domain: Smart card-based identity systems, especially European eID cards, ePassports (EAC – Extended Access Control), government-issued credentials where the card itself performs verification (offline, constrained environment).
  • Key differences from X.509:
  • Designed for resource-constrained devices (smart cards).
  • Focus on role-based authorization rather than general identity.
  • Shorter chain length, often self-described references (CAR = Certificate Authority Reference, CHR = Certificate Holder Reference).
  • Structure relevant to signatures:
  • TLV (Tag-Length-Value) encoding (BER-TLV, more flexible than DER).
  • Profile Identifier, CAR, Public Key, CHR, Certificate Holder Authorization Template (CHAT – defines roles/permissions), validity dates, optional extensions, and outer signature.
  • No separate TBSCertificate block – the signature is over the entire certificate body (all fields except the signature itself).
  • Signature process:
  • Concatenate the body fields in defined order.
  • Hash and sign with issuer’s private key (almost always ECDSA for performance on cards).
  • Common curves: BrainpoolP256r1, BrainpoolP384r1, or NIST P-256.
  • Chain: Starts from a root CVCA (often linked to an X.509 CSCA), then DV (Document Verifier) certificates, then terminal/IS certificates – all in CVC format after the root.
  • Libraries/tools: Less common than X.509. Often BSI libraries, OpenSC, or custom implementations in JavaCard/GlobalPlatform environments.

Quick Comparison Table

Aspect | X.509 | CVC (Card Verifiable Certificate)
Primary Use | General PKI, web, enterprise | Smart cards, eID, ePassport EAC
Encoding | ASN.1 DER (strict) | BER-TLV (flexible)
Signed Data | TBSCertificate only | Entire body except signature
Typical Algorithms | RSA or ECDSA | Almost always ECDSA (card-friendly)
Authorization Model | Extensions (e.g., SAN, EKU) | CHAT field (role-based, bitmask)
Chain Length | Can be long | Usually very short (1–3 levels)
Verification Location | Anywhere (online/offline) | Often on the card itself (offline)
Standard Bodies | ITU-T, IETF (RFC 5280) | ISO/IEC 7816, BSI TR-03110

Why This Matters for Your Job

  • In identity/credential systems, you’ll often need to issue, verify, or chain both types.
  • Modern systems sometimes bridge them: an X.509 CSCA root signs a CVC chain for ePassport access control.
  • When implementing verification services, remember that CVC requires stricter parsing of TLV structures and specific OID handling.
  • Performance tip: ECDSA verification is much faster than RSA on constrained devices → prefer ECDSA when designing new credential formats.

Spend your 10 minutes today mentally walking through a verification flow: take a real client certificate (X.509) in your browser, view it, and trace the signature fields. Then look up a BSI TR-03110 example CVC (public test data exists) and compare the structure.

Next time you ask about a related topic (e.g., specific algorithms, revocation, or verifiable credentials with JSON-LD signatures), I’ll connect it back to this foundation.

Uncategorized

bash alias

Option 3: Most comfortable (long-term) – SSH key + small alias on server

  1. Set up SSH key authentication (so no password typing for SSH)

     On Windows (PowerShell):

     ssh-keygen -t ed25519 -C "your@email.com"   # press enter for defaults

     Then copy the public key to the server:

     type $HOME\.ssh\id_ed25519.pub | ssh FLOXII@xx.xx.xxx.xx "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

     → from now on ssh FLOXII@xx.xx.xxx.xx works without a password

  2. Create aliases on the server (you only do this once)

     SSH to the server:

     ssh FLOXII@xx.xx.xxx.xx

     Then edit ~/.bashrc:

     nano ~/.bashrc

     Add at the end:

     alias dev='sudo docker exec -it data_tool_dev-nextjs_test-1 sh'
     alias dev-restart='sudo docker exec -it data_tool_dev-nextjs_test-1 sh -c "kill -SIGTERM 1 2>/dev/null || true && npm install && echo \"Starting Next.js...\" && npm start"'
     alias dev-logs='sudo docker logs -f data_tool_dev-nextjs_test-1'

     Save and reload:

     source ~/.bashrc

  3. Now from Windows you can run:

     ssh -t FLOXII@xx.xx.xxx.xx bash -ic dev

     → directly inside the container (no sudo password needed if you configure sudoers, but even without it's only one password)

     Note: a plain ssh FLOXII@xx.xx.xxx.xx dev usually fails, because bash does not expand aliases in non-interactive shells; bash -ic forces an interactive shell that reads ~/.bashrc, and -t allocates the TTY that docker exec -it needs.

     Or restart + install in one go:

     ssh -t FLOXII@xx.xx.xxx.xx bash -ic dev-restart

Uncategorized

How to Publish an npm Package for Your Next.js Project

Publishing an npm package for your Next.js project lets you share reusable React components or utilities with the world (or your team). Here’s a quick guide on what an npm package is, why you’d want to create one, and how to publish and consume it.

What is an npm Package?

An npm package is a shareable bundle of code hosted on npmjs.com. For a Next.js project, this typically includes React UI components or utility functions, as Next.js-specific features like API routes or file-system routing can’t be packaged due to their dependence on the Next.js server runtime.

Why Create an npm Package?

  • Reusability: Share components or helpers across multiple projects.
  • Collaboration: Allow others to use your code via a simple npm install.
  • Versioning: Manage updates and dependencies with semantic versioning.

How to Create and Publish an npm Package

1. Set Up Your Project

Create a Next.js project (or use an existing one) and focus on components or utilities. For example, a library of reusable React components.

npx create-next-app@latest my-lib
cd my-lib

2. Configure package.json

Ensure your package.json has the correct metadata. The package name must include your npm scope (e.g., @yourname/mylib). Example:

{
  "name": "@flowiso/mylib",
  "version": "1.0.0",
  "main": "dist/index.js",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/flowxcode/mylib.git"
  },
  "scripts": {
    "build": "tsc"
  }
}

Note: The package name (e.g., @yourname/mylib) doesn’t need to match your GitHub repo name. The repository field links to your GitHub repo for metadata.

3. Build Your Library

Use TypeScript or a bundler like Rollup to compile your code into a dist folder. For a simple setup, add a tsconfig.json and run:

npm install typescript --save-dev
npx tsc --init
npm run build

4. Log In to npm

Ensure you’re logged into npm with the correct account matching your scope:

npm login
npm whoami

Verify it returns yourname (e.g., flowiso).

5. Publish to npm

For scoped packages (e.g., @yourname/mylib), you must specify public access:

npm publish --access public

Your package is now live at npmjs.com/@yourname/mylib! 🎉

Pro Tip: Test locally before publishing to avoid version bumps:

npm pack
npm install ./yourname-mylib-1.0.0.tgz

⚠️ Next.js Limitation

You can’t package Next.js API routes or file-system routing. Stick to React components or utilities.

How to Consume Your Package

In another project, install your package:

npm install @yourname/mylib

Import and use it in your code:

import { MyComponent } from '@yourname/mylib';

my pkg

https://www.npmjs.com/settings/flowiso/packages

https://github.com/flowxcode/njs-core

Uncategorized

git screwing with my CR LF

Your tools aren’t “screwing” with your file on a whim—they’re all following their configured rules for handling line endings. The core issue is that your Git settings and your editors’ settings are misaligned. Here’s the blunt truth:

  1. Git’s Role (core.autocrlf & .gitattributes):
    • Git stores files in the repo with LF by default. On Windows, if you’ve set core.autocrlf to true, Git should convert LF to CRLF on checkout and the reverse on commit.
    • If your core.autocrlf is set to false or input, Git won’t do any conversion on checkout, so you might end up with LF files on disk even on Windows.
    • A missing or misconfigured .gitattributes file can also lead to inconsistent behavior across different systems. Without it, Git might not enforce a consistent line ending policy.
  2. Editor Behavior:
    • VS Code: It can automatically set the end-of-line sequence based on its files.eol setting. If that’s set to \n (LF), then when you save the file, VS Code will force LF—even if the file originally came in as CRLF.
    • Notepad++: It shows what’s on disk. If it’s reading CRLF, that suggests one tool somewhere is converting your LF to CRLF on disk (or vice versa). Sometimes Notepad++ might be reporting what it “expects” based on its default Windows behavior.
    • GitKraken & Visual Studio: They generally rely on Git’s configuration. So if Git is normalizing to LF internally, these tools might show LF even though the file on disk might be CRLF when inspected in another editor.
  3. What’s Happening in Your Case:
    • You see the file as CRLF in Notepad++ but as LF in your local Git checkout (and when viewed in VS Code). That’s because somewhere in your workflow Git is normalizing the file to LF (likely because the repo has LF and your Git config isn’t set to convert back to CRLF on checkout).
    • When you save the file in VS Code, it’s saved with LF (because of its settings), so Git sees a complete line-ending change compared to the version stored with CRLF on origin or on other computers. This discrepancy makes Git mark the entire file as changed.
  4. How to Fix It:
    • Align Git Settings: Check your Git config with git config --get core.autocrlf. On Windows, if you want CRLF in your working directory, set it to true. If you prefer LF everywhere, set it to input (but then you might see these differences on other Windows machines expecting CRLF).
    • Use a .gitattributes File: Create a .gitattributes file in your repo root and add rules like:

      *.json text eol=crlf

      or:

      * text=auto

      This forces Git to treat files the same way on all systems.
    • Synchronize Editor Settings: Ensure VS Code’s files.eol (and similar settings in other editors) matches what you want. For CRLF on Windows, set it to \r\n.

Bottom Line:
It’s not GitKraken acting on its own; it’s your configuration mismatches. Git is doing exactly what it’s been told—storing files with LF in the repo and converting on checkout based on your settings. Your editors then might override or display differently based on their own configurations. Get your Git settings and your editors in sync (using a proper .gitattributes file is often the best solution), and the headache will be over.

Fuck line endings—sort out your configs, and you’ll be golden.

thx

phase2:

Alright, listen up. If you want every damn file—every single text file—in your working copy to be CRLF, you need to force Git to treat all files as text and then tell it to check them out with CRLF line endings. But let me be clear: Git is designed to store text files normalized (LF) in the repo and only convert on checkout. There's no magic switch to store them as CRLF inside the Git objects, but your working directory can have CRLF everywhere.

Here's how to force it:

  1. Create or Edit Your .gitattributes File:
    In the root of your repo, create (or modify) a file called .gitattributes with this single line:

      * text=auto eol=crlf

    This line tells Git to treat every file it can as text and, on checkout, convert line endings to CRLF.

    Warning: This only applies to files Git considers text. For files that Git heuristically thinks are binary, conversion won’t happen. Forcing binary files to convert is a disaster waiting to happen.
  2. Force Git to Reapply Conversion:
    If your files were already checked out before setting up your .gitattributes, Git isn’t going to magically convert them on its own. You need to force a re-checkout:

      git rm --cached -r .
      git reset --hard

    This removes everything from the index and then checks the files out again, now obeying your .gitattributes rules (and your core.autocrlf=true setting).
  3. Double-Check Your Editor Settings:
    • VS Code: Ensure your settings are not undoing your hard work. In your settings.json, set:

      "files.eol": "\r\n"
    • Notepad++/VS: Verify they’re not overriding the file’s CRLF status on save.
  4. Understand the Limitations:
    • Internal Repo Storage: Git will always store text files in normalized LF in its internal storage. You’ll see CRLF only in your working directory.
    • Binary Files: If there are files that are not meant to be text (like images or compiled binaries), leave them alone. Forcing a conversion on them could corrupt them.

Bottom line:
Create a .gitattributes file with * text=auto eol=crlf, force a full re-checkout, and make sure your editors are set to use CRLF. That’s the blunt truth—get your configs in sync and you’ll see CRLF everywhere you want.

smartcard world

What we know about ISD AID

In the context of JavaCard technology, ISD AID refers to the Issuer Security Domain Application Identifier.

Key Concepts:

  1. JavaCard:
    • JavaCard is a technology that allows Java-based applications (applets) to run on smart cards and similar secure devices. It provides a secure environment where applets can be executed securely, and it typically includes functionalities like cryptographic operations, secure storage, and communication protocols.
  2. AID (Application Identifier):
    • An AID is a unique identifier used to distinguish different applications (or applets) on a smart card. It's a sequence of bytes that uniquely identifies an applet on the JavaCard platform.
  3. ISD (Issuer Security Domain):
    • The Issuer Security Domain is a special applet on a JavaCard that acts as the security anchor for the card issuer. It is responsible for managing keys, loading and managing applets, and securing communications. The ISD essentially represents the card issuer's control over the card.
  4. ISD AID:
    • The ISD AID is the Application Identifier specifically assigned to the Issuer Security Domain. This AID uniquely identifies the ISD on the card and is used to route commands and manage applets securely within the JavaCard environment.

Functions of the ISD:

  • App Management: The ISD manages the lifecycle of applets on the JavaCard, including their installation, deletion, and personalization.
  • Security Management: The ISD handles the security operations of the card, such as cryptographic key management, secure messaging, and access control.
  • Communication Gateway: The ISD facilitates secure communication between the card issuer and the JavaCard, ensuring that commands are authenticated and authorized.

Importance of ISD AID:

The ISD AID is crucial because it's how the card issuer and external systems can interact with and manage the JavaCard's security domain. When deploying or managing applets, the ISD AID is used to target the ISD for specific commands, ensuring that only authorized operations are performed.

In summary, the ISD AID in JavaCard technology is the unique identifier of the Issuer Security Domain, which is central to managing the security and application lifecycle on the card.
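
As a concrete illustration, managing the card starts by selecting the ISD with an ISO 7816-4 SELECT-by-AID command. The AID below is the well-known GlobalPlatform default ISD AID; real issuers often assign their own, so treat it as an assumption:

```python
# SELECT by AID (ISO 7816-4): CLA=00, INS=A4, P1=04 (select by DF name), P2=00
# A000000151000000 is the GlobalPlatform default ISD AID (issuer-specific in practice)
isd_aid = bytes.fromhex("A000000151000000")
select_isd = bytes([0x00, 0xA4, 0x04, 0x00, len(isd_aid)]) + isd_aid + b"\x00"
print(select_isd.hex().upper())  # 00A4040008A00000015100000000
```

The trailing 00 is the Le byte requesting the full response; on success the card answers with the ISD’s file control information plus status word 9000.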

smartcard world

Uncategorized

difference between Infineon SLE 36 and SLE 78

The Infineon SLE 36 and SLE 78 series are both families of security microcontrollers designed for secure applications such as smart cards, secure identification, and access control systems. However, they differ in several key aspects:

Security Features

  • SLE 78: This series is known for its advanced security features like Integrity Guard, which provides full encryption of data paths and offers various countermeasures against physical and logical attacks.
  • SLE 36: Generally has basic security features and may not offer the advanced countermeasures against attacks that the SLE 78 series provides.

Cryptographic Support

  • SLE 78: Supports a wide range of cryptographic algorithms including RSA, ECC, DES, and AES.
  • SLE 36: Typically supports basic cryptographic algorithms like DES and 3DES but lacks the extensive cryptographic capabilities of the SLE 78 series.

Security Certification

  • SLE 78: Often certified to higher Common Criteria levels, such as CC EAL 6+, making it suitable for high-security applications.
  • SLE 36: May have some level of security certification but usually not as high as the SLE 78 series.

Processing Speed and Memory

  • SLE 78: Generally offers higher processing speeds and more memory, suitable for applications that require fast data processing and more storage.
  • SLE 36: Typically has less memory and may operate at lower speeds.

Use Cases

  • SLE 78: Because of its advanced features, it's used in high-security applications like electronic passports, secure elements in mobile devices, and secure identification cards.
  • SLE 36: More suited for lower-security applications where cost-effectiveness is a priority but some level of security is still required.

For security research, the key comparison between these two series is the trade-off between security features on one side and performance or cost on the other.

codedev pot

hot operators

https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/

It's the power of operators, so always learn, try, and play.

Operators by category, highest precedence first:

  • Primary: x.y, f(x), a[i], x?.y, x?[y], x++, x--, x!, new, typeof, checked, unchecked, default, nameof, delegate, sizeof, stackalloc, x->y
  • Unary: +x, -x, !x, ~x, ++x, --x, ^x, (T)x, await, &x, *x, true and false
  • Range: x..y
  • switch and with expressions: switch, with
  • Multiplicative: x * y, x / y, x % y
  • Additive: x + y, x - y
  • Shift: x << y, x >> y, x >>> y
  • Relational and type-testing: x < y, x > y, x <= y, x >= y, is, as
  • Equality: x == y, x != y
  • Boolean logical AND or bitwise logical AND: x & y
  • Boolean logical XOR or bitwise logical XOR: x ^ y
  • Boolean logical OR or bitwise logical OR: x | y
  • Conditional AND: x && y
  • Conditional OR: x || y
  • Null-coalescing operator: x ?? y
  • Conditional operator: c ? t : f
  • Assignment and lambda declaration: x = y, x += y, x -= y, x *= y, x /= y, x %= y, x &= y, x |= y, x ^= y, x <<= y, x >>= y, x >>>= y, x ??= y, =>
operators
smartcard world

APDU Lc and Le encoding

Standard ISO_IEC_7816-4-2020

https://www.iso.org/obp/ui/#iso:std:iso-iec:7816:-4:ed-4:v1:en

What we want:

  • we want to test 2-byte Le fields, which means the Lc field must already signal that the command/response APDU pair is extended: Lc starts with 00 followed by 2 bytes of length info,
  • then the data, i.e. the command data,
  • then Le encoded in 2 bytes.

e.g. normal (short) length APDU: Lc and Le are one byte each

00CB3FFF
04
5C02DF40
09

e.g. extended length APDU: Lc is 00 followed by two length bytes, Le is two bytes

00CB3FFF
000004
5C02DF40
0009
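Both encodings can be sketched in a few lines of Python, using the header 00CB3FFF and data 5C02DF40 from the examples above. This is a simplified case-4 encoder (command data present, response expected), not a full ISO/IEC 7816-4 implementation:

```python
def encode_case4(header: bytes, data: bytes, le: int, extended: bool) -> bytes:
    """Encode a case-4 command APDU.
    Short form: Lc and Le are one byte each.
    Extended form: Lc = 00 + 2 bytes; Le = 2 bytes (when an extended Lc is present)."""
    if extended:
        lc = b"\x00" + len(data).to_bytes(2, "big")
        le_field = le.to_bytes(2, "big")
    else:
        lc = bytes([len(data)])
        le_field = bytes([le])
    return header + lc + data + le_field

header = bytes.fromhex("00CB3FFF")  # CLA/INS/P1/P2 from the example above
data = bytes.fromhex("5C02DF40")    # command data (tag list)
print(encode_case4(header, data, 9, extended=False).hex().upper())
# 00CB3FFF045C02DF4009
print(encode_case4(header, data, 9, extended=True).hex().upper())
# 00CB3FFF0000045C02DF400009
```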

https://stackoverflow.com/questions/40663460/use-apdu-commands-to-get-some-information-for-a-card

smartcard world

smartcard steps

creating objects AMR cmd

  • DataAccessRightTemplate
  • creating all enum values to reduce lines of code
  • object with several keys
  • diff data
  • selection of different levels, gd and ad
  • exception catcher for not-implemented parts
  • authentication sequence still more or less unclear
  • changing access rights for the corresponding hex tag

steps contd

  • remove try catch blocks
  • remove pragma directives

smartcard world

smart card encryption FAQs

see a task, pick it, and start at step 0:

IFD

Interface Device (IFD): the ICC reader, in PC/SC terminology

http://pcscworkgroup.com/Download/Specifications/pcsc4_v2.01.01.pdf

general authenticate

The GENERAL AUTHENTICATE command is used to establish a Secure Channel session, according to Secure Channel Protocol '03' described in GPCS Amendment D [Amd D].

symmetric key

ISO/IEC 11770-2:2018
https://www.iso.org/standard/73207.html

apdu

https://en.wikipedia.org/wiki/Smart_card_application_protocol_data_unit

RND

RND.IFD and RND.ICC are each 16 bytes

A.IFD

A.IFD = RND.IFD || RND.ICC etc
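The concatenation can be sketched as follows in Python. Note the "etc" in the note above: the real authentication token includes further fields, which are omitted here.

```python
import os

def build_a_ifd(rnd_ifd: bytes, rnd_icc: bytes) -> bytes:
    """Concatenate the two 16-byte challenges: A.IFD = RND.IFD || RND.ICC.
    (The full protocol appends further fields, omitted in this sketch.)"""
    assert len(rnd_ifd) == 16 and len(rnd_icc) == 16
    return rnd_ifd + rnd_icc

rnd_ifd = os.urandom(16)  # reader's (IFD's) challenge
rnd_icc = os.urandom(16)  # card's (ICC's) challenge
a_ifd = build_a_ifd(rnd_ifd, rnd_icc)
print(len(a_ifd))  # 32
```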

Uncategorized

NUnit from scratch

https://github.com/nunit/nunit-vs-templates

same here as below: I don't want to install all sorts of extensions etc. just to scour through examples

https://github.com/nunit/dotnet-new-nunit

skip this, trust me, it is trash ;) and not compatible to open.

https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-nunit

follow-up and self-contained repo

https://github.com/flowxcode/nunit-template

following the MS docs, to test the tests

https://nunit.org/nunitv2/docs/2.6.4/quickStart.html

started to debug, then decided to analyse the templates and the MS description; building a scratch project to debug and to build up a proper architecture

/unit-testing-using-nunit
    unit-testing-using-nunit.sln
    /PrimeService
        Source Files
        PrimeService.csproj
    /PrimeService.Tests
        Test Source Files
        PrimeService.Tests.csproj

https://docs.educationsmediagroup.com/unit-testing-csharp/nunit/lifecycle-of-a-test-fixture

Lifecycle of a test fixture

As mentioned before, NUnit gives the developer the possibility to extract all initialization and tear-down code that multiple tests might be sharing into ad-hoc methods.

Developers can take advantage of the following facilities to streamline their fixtures:

  • A method decorated with a SetUp attribute will be executed before each test
  • A method decorated with a TearDown attribute will be executed after each test
  • A method decorated with a OneTimeSetUp attribute will be executed before any test is executed
  • A method decorated with a OneTimeTearDown attribute will be executed after all tests have been executed
  • The class constructor will be executed before any method and can be used to prepare fields that shouldn't be modified while executing the tests.

Additionally, developers can set up fixtures contained in a namespace and all its children by creating a class decorated with the attribute SetUpFixture. This class will be able to contain methods decorated with OneTimeSetUp and OneTimeTearDown attributes.

NUnit supports multiple SetUpFixture classes: in this case, setup methods will be executed starting from the most external namespace in and the teardown from the most internal namespace out.

nunit

References:

https://automationintesting.com/csharp/nunit/lessons/whataretestfixtures.html

Uncategorized

secure authentication

read the counter of auth attempts:

https://globalplatform.org/specs-library/card-specification-v2-3-1/

The INITIALIZE UPDATE command is used, during explicit initiation of a Secure Channel, to transmit card and session data between the card and the host. This command initiates a Secure Channel Session.

read the counter from the APDU responses.

findings: the counter is increasing across the 2 APDU responses

https://www.rapidtables.com/convert/number/hex-to-decimal.html

https://www.scadacore.com/tools/programming-calculators/online-hex-converter/
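The converter sites above can be replaced by a couple of lines of Python. The counter values below are hypothetical examples of a 2-byte sequence counter read from two consecutive responses, not values from a real card:

```python
# Hypothetical 2-byte counter values from two consecutive APDU responses
responses = ["00C1", "00C2"]
counters = [int(h, 16) for h in responses]  # hex string -> decimal
print(counters)                   # [193, 194]
print(counters[1] - counters[0])  # 1 -> counter increased between responses
```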

Uncategorized

NUnit setup tmr for test setup

things to be done tomorrow; details to be provided and worked out

  • SetUpAttribute is now used exclusively for per-test setup.
  • TearDownAttribute is now used exclusively for per-test teardown.
  • OneTimeSetUpAttribute is used for one-time setup per test-run. If you run n tests, this event will only occur once.
  • OneTimeTearDownAttribute is used for one-time teardown per test-run. If you run n tests, this event will only occur once.
  • SetUpFixtureAttribute continues to be used as before, but with changed method attributes.

Uncategorized

FactoryPattern adaptions real world

adapt a pattern and build a real-world example into it

https://github.com/flowxcode/dotnet-design-patterns-samples#factory-method

https://github.com/flowxcode/dotnet-design-patterns-samples/tree/master/Generating/FactoryMethod

  1. build pattern
  2. introduce config
  3. locate point from where to read config in current project
  4. read config early and compare in diagrams

the factory pattern:

https://dotnetcoretutorials.com/2019/10/15/the-factory-pattern-in-net-core/

TestFixtureContextBase.cs: DeviceMapper gets a new DeviceFactory, which creates Devices; Devices inherit from the Device class.
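The factory-method shape described above can be sketched as follows. The linked repo is C#; this Python sketch borrows only the Device/DeviceFactory names from the notes, and the concrete device classes (CardReader, Simulator) and the "kind" config value are hypothetical:

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Base class all concrete devices inherit from."""
    @abstractmethod
    def connect(self) -> str: ...

class CardReader(Device):   # hypothetical concrete device
    def connect(self) -> str:
        return "card reader connected"

class Simulator(Device):    # hypothetical concrete device
    def connect(self) -> str:
        return "simulator connected"

class DeviceFactory:
    """Factory method: choose the concrete Device from a config value
    (steps 2-4 above: read the config early and pass it in here)."""
    _registry = {"reader": CardReader, "simulator": Simulator}

    def create(self, kind: str) -> Device:
        try:
            return self._registry[kind]()
        except KeyError:
            raise ValueError(f"unknown device kind: {kind}")

factory = DeviceFactory()
print(factory.create("simulator").connect())  # simulator connected
```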