Uncategorized

vs code and ProxiLAB

Tried SCRIPTIS, but the debugger is flaky, so VS Code got the shot once again.

    error = ProxiLAB.Reader.ISO14443.SendTclCommand(0x00, 0x00, TxBuffer, RxBuffer)
    if error[0]:
        print("Tcl: {0}".format(ProxiLAB.GetErrorInfo(error[0])))
        PopMsg += "Tcl: {0}\n".format(ProxiLAB.GetErrorInfo(error[0]))
    else:
        response = ''.join("0x%02X " % x for x in RxBuffer.value)
        print("Tcl response: " + response)
        PopMsg += "Tcl response: " + response + "\n"

Tcl response: 0x6A 0x86 (ISO/IEC 7816-4 status word: incorrect parameters P1-P2)
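The two bytes returned here are an ISO/IEC 7816-4 status word; a minimal lookup sketch (the helper name and table are illustrative, not part of the ProxiLAB API):

```python
# Hypothetical helper (not part of the ProxiLAB API): map ISO/IEC 7816-4
# status words SW1-SW2 to their standard meaning.
STATUS_WORDS = {
    (0x90, 0x00): "Normal processing",
    (0x6A, 0x82): "File not found",
    (0x6A, 0x86): "Incorrect parameters P1-P2",
}

def describe_sw(sw1, sw2):
    return STATUS_WORDS.get((sw1, sw2), "Unknown status word")

print(describe_sw(0x6A, 0x86))  # Incorrect parameters P1-P2
```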

Uncategorized

keolabs ProxiLAB Quest

Doing some #python tests with GetCard and SendCommand for ISO/IEC 14443 smartcards.

  • Python runs and traces in the Quest software as well as in the RGPA software; disadvantage: missing debugging comfort
  • so moved to VS Code with Python and the Keolabs lib Python file
  • searching for the implementation file and possible DLLs for C# integration
  • Poller0 is the PCD (proximity coupling device); the card is the PICC (proximity integrated circuit card)
  • challenge is to set up a full Python environment outside of the delivered Quest; full details on the API functions?
https://www.keolabs.com/products/services-accessories/nomad-tester

https://diglib.tugraz.at/download.php?id=5f588b91684cf&location=browse

https://github.com/scriptotek/pyrfidgeek/blob/61595be017fe56f1f668422c15bc50354274a310/rfidgeek/rfidgeek.py#L122

    def inventory_iso14443A(self):
        """
        By sending a 0xA0 command to the EVM module, the module will carry out
        the whole ISO14443 anti-collision procedure and return the tags found.
            >>> Req type A (0x26)
            <<< ATQA (0x04 0x00)
            >>> Select all (0x93, 0x20)
            <<< UID + BCC
        """
        response = self.issue_evm_command(cmd='A0')

        for itm in response:
            iba = bytearray.fromhex(itm)
            # Assume 4-byte UID + 1 byte Block Check Character (BCC)
            if len(iba) != 5:
                logger.warn('Encountered tag with UID of unknown length')
                continue
            if iba[0] ^ iba[1] ^ iba[2] ^ iba[3] ^ iba[4] != 0:
                logger.warn('BCC check failed for tag')
                continue
            uid = itm[:8]  # hex string, so each byte is two chars

            logger.debug('Found tag: %s (%s) ', uid, itm[8:])
            yield uid

            # See https://github.com/nfc-tools/libnfc/blob/master/examples/nfc-anticol.c
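As a side note to the BCC check in the quoted code: for a single-size (4-byte) UID cascade level, the BCC is the XOR of the four UID bytes (ISO/IEC 14443-3), so XOR-ing all five frame bytes of a valid response yields zero. A quick sketch with a made-up UID:

```python
# ISO/IEC 14443-3: for a single-size (4-byte) UID, BCC is the XOR of the
# four UID bytes. The UID here is made up.
uid = bytes.fromhex("04a1b2c3")
bcc = uid[0] ^ uid[1] ^ uid[2] ^ uid[3]
frame = uid + bytes([bcc])
# The check in the quoted code XORs all five bytes, which must give zero:
assert frame[0] ^ frame[1] ^ frame[2] ^ frame[3] ^ frame[4] == 0
print(hex(bcc))  # 0xd4
```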
Uncategorized

cache hacks

https://dmalcolm.fedorapeople.org/gcc/2015-08-31/rst-experiment/how-to-use-inline-assembly-language-in-c-code.html#clobbers

compiler explorer:
https://godbolt.org/z/kANkNL

void maccess(void *p) { asm volatile("movq (%0), %%rax\n" : : "c"(p) : "rax"); }

Moves a quadword from the memory address (held in %rcx via the "c" constraint) into the rax register.
AT&T syntax: source operand first, destination second, % register prefix.

shm_open and mmap:

quote: mmap works in multiples of the page size on your system. If you're doing this on i386/amd64 or actually most modern CPUs, this will be 4096.

In the man page of mmap on my system it says: "offset must be a multiple of the page size as returned by sysconf(_SC_PAGE_SIZE).". On some systems for historical reasons the length argument may be not a multiple of page size, but mmap will round up to a full page in that case anyway.
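The page-size behaviour from the quote can be observed from Python's mmap module; a small sketch (using an anonymous mapping instead of shm_open for simplicity):

```python
import mmap
import os

# The page size the quote refers to; 4096 on most x86-64 Linux systems.
page = os.sysconf("SC_PAGE_SIZE")

# mmap lengths are rounded up to a whole page, and any file offset given
# to mmap must be a multiple of this page size.
m = mmap.mmap(-1, 100)  # anonymous mapping; still occupies a full page
m[:5] = b"hello"
data = bytes(m[:5])
m.close()
print(page, data)
```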

Uncategorized

gradle and antlr for compiler set ups

Commands:
./gradlew compileJava
./gradlew compileTestJava
./gradlew printTree -PfileName=PATH_TO_JOVA_FILE
./gradlew clean

./gradlew compileJava && ./gradlew compileTestJava && ./gradlew printTree -PfileName=PATH_TO_JOVA_FILE && ./gradlew clean
./gradlew printTree -PfileName=main/antlr/at/tugraz/ist/cc/Calc.g4

gradlew permission denied — fix: chmod +x gradlew
https://www.cloudhadoop.com/gradlew-permission-denied/

src/test/resources/public/input/lexer

Uncategorized

homomorphic encryption

https://www.techtarget.com/searchsecurity/definition/homomorphic-encryption

quote: "

Homomorphic encryption is the conversion of data into ciphertext that can be analyzed and worked with as if it were still in its original form.  

Homomorphic encryptions allow complex mathematical operations to be performed on encrypted data without compromising the encryption. In mathematics, homomorphic describes the transformation of one data set into another while preserving relationships between elements in both sets.  The term is derived from the Greek words for "same structure." Because the data in a homomorphic encryption scheme retains the same structure, identical mathematical operations -- whether they are performed on encrypted or decrypted data --  will yield equivalent results.

Homomorphic encryption is expected to play an important part in cloud computing, allowing companies to store encrypted data in a public cloud and take advantage of the cloud provider’s analytic services.

Here is a very simple example of how a homomorphic encryption scheme might work in cloud computing:

  • Business XYZ has a very important data set (VIDS) that consists of the numbers 5 and 10.  To encrypt the data set, Business XYZ multiplies each element in the set by 2, creating a new set whose members are 10 and 20.
  • Business XYZ sends the encrypted VIDS set to the cloud for safe storage.  A few months later, the government contacts Business XYZ and requests the sum of VIDS elements.   
  • Business XYZ is very busy, so it asks the cloud provider to perform the operation.  The cloud provider, who only has access to the encrypted data set,  finds the sum of 10 + 20 and returns the answer 30.
  • Business XYZ decrypts the cloud provider’s reply and provides the government with the decrypted answer, 15.

"

There is a Python lib, PySEAL:

https://gab41.lab41.org/pyseal-homomorphic-encryption-in-a-user-friendly-python-package-e27547a0b62f

https://blog.openmined.org/build-an-homomorphic-encryption-scheme-from-scratch-with-python/

https://bit-ml.github.io/blog/post/homomorphic-encryption-toy-implementation-in-python/

Uncategorized

Private Information Retrieval

Additive Secret Sharing

Since all shares (except for one) are chosen randomly, every share is indistinguishable from a random value, and no one can learn anything about the secret a by observing at most n − 1 shares.
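A minimal additive-sharing sketch (the modulus P is an assumption for illustration):

```python
import secrets

P = 2**31 - 1  # public modulus, chosen here just for illustration

def share(secret, n):
    # n - 1 shares are uniformly random; the last one fixes the sum
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

print(reconstruct(share(42, 5)))  # 42
```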

Shamir Secret Sharing

Drawback of additive secret sharing is that parties can drop out and fail to provide their share.

-

For both sharing methods, holders of the secret shares can compute linear functions on their shares.
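Sketch of that linearity for additive shares: each holder adds its two shares locally, and the resulting sums reconstruct a + b (names and modulus are illustrative):

```python
import secrets

P = 2**31 - 1  # public modulus, illustration only

def additive_share(v, n):
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

# Each of the three holders adds its share of a and its share of b locally;
# no communication is needed to obtain a sharing of a + b.
a_shares = additive_share(20, 3)
b_shares = additive_share(22, 3)
sum_shares = [(sa + sb) % P for sa, sb in zip(a_shares, b_shares)]
print(sum(sum_shares) % P)  # 42
```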

PrivaGram

encode index of the chosen image in a bit string using one-hot encoding
XOR
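The two notes above combine into the classic 2-server XOR PIR sketch: the one-hot query vector is split into two XOR shares, each server XORs the records its share selects, and the client XORs the two answers (database and names here are made up):

```python
import secrets

def share_query(index, n_items):
    # One-hot query vector e_index, split into two XOR shares
    e = [1 if i == index else 0 for i in range(n_items)]
    r = [secrets.randbelow(2) for _ in range(n_items)]
    return r, [a ^ b for a, b in zip(e, r)]

def server_answer(db, query):
    # Each server XORs the records selected by its query bits
    acc = 0
    for rec, bit in zip(db, query):
        if bit:
            acc ^= rec
    return acc

db = [0x11, 0x22, 0x33, 0x44]
q1, q2 = share_query(2, len(db))
ans = server_answer(db, q1) ^ server_answer(db, q2)
print(hex(ans))  # 0x33 == db[2]
```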

adding robustness

Shamir secret sharing instead of additive secret sharing

  • robust against server dropping out, k-out-of-l PIR
  • at least t+1 servers are required to reconstruct the secret; t-private l-server PIR
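A Shamir-sharing sketch matching those bullets: a random degree-t polynomial hides the secret, and any t+1 points recover it via Lagrange interpolation at 0 (the field modulus is an assumption):

```python
import secrets

P = 2**31 - 1  # public prime field, illustration only

def shamir_share(secret, t, n):
    # Random degree-t polynomial f with f(0) = secret; share i is (i, f(i))
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(points):
    # Lagrange interpolation evaluated at x = 0
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = shamir_share(1234, t=2, n=5)
print(shamir_reconstruct(shares[:3]))  # any t + 1 = 3 shares recover 1234
```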

t-private k-out-of-l PIR protocol

adding homomorphic encryption

collude

final protocol

tbd

Uncategorized

Secure Classification

Secure Multiparty Computation like Yao’s Millionaires’ Problem [Yao82]

SPDZ
http://bristolcrypto.blogspot.com/2016/10/what-is-spdz-part-1-mpc-circuit.html

A secret value x is shared amongst n parties, such that the sum of all shares is equal to x.

  • uniformly at random

adding sec

  • in SPDZ MACs are used to authenticate the shares.
  • global MAC key
  • each party knows a share of the global MAC key

sharing an input value

  • sharing masked version of x
  • each party computes <x>
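A toy version of the MACed sharing <x>, with a trusted dealer standing in for the SPDZ offline phase (in real SPDZ no single party ever holds the global MAC key α, only a share of it):

```python
import secrets

P = 2**31 - 1                 # public prime, illustration only
ALPHA = secrets.randbelow(P)  # global MAC key (in SPDZ only shares of it exist)

def additive(v, n):
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

def authenticated_share(x, n):
    # <x>: each party gets (share of x, share of ALPHA * x)
    return list(zip(additive(x, n), additive(ALPHA * x % P, n)))

def open_and_check(shares):
    x = sum(s for s, _ in shares) % P
    mac = sum(m for _, m in shares) % P
    assert (mac - ALPHA * x) % P == 0, "MAC check failed"
    return x

print(open_and_check(authenticated_share(7, 3)))  # 7
```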

next:

opening a value
partially
output
directional output
MAC check protocol
coin tossing protocol
commitments

Uncategorized

c++ in docker and VS Code

https://code.visualstudio.com/docs/remote/containers

https://code.visualstudio.com/docs/remote/containers-tutorial

g++ main.cpp -o main.out

mkdir build && cd build && cmake .. && make test
cd build && cmake .. && make test
cmake .. && make test

docker container ls --all
docker exec -it quizzical_banach /bin/bash

docker ps -a
docker start NAME

docker container ls // running

// VS Code notifies you to reopen the workspace in the container...

apt list --installed

Uncategorized

Boston Housing Data Analysis

API
https://www.tensorflow.org/api_docs/python/tf/keras/datasets/boston_housing/load_data

Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. Targets are the median values of the houses at a location (in k$).

http://lib.stat.cmu.edu/datasets/boston

404×13 = 5252

y_train x_train 404 samples
y_test x_test 102 samples
x 13
y 1 target scalar

y_train, y_test: numpy arrays of shape (num_samples,) containing the target scalars. The targets are float scalars typically between 10 and 50 that represent the home prices in k$.

Uncategorized

conda jupyter tensorflow

found on #stackoverflow
https://stackoverflow.com/a/43259471/1650038

Create a virtual environment - conda create -n tensorflowx

  • conda activate tensorflowx

So then the next thing, when you launch it:

  1. If you are not inside the virtual environment, type: source activate tensorflowx
  2. Then inside it, install Jupyter notebook and pandas again, because some libraries can be missing in this virtual environment

Inside the virtual environment just type:

  1. pip install jupyter notebook
  2. pip install pandas

Then you can launch jupyter notebook saying:

  1. jupyter notebook
  2. Select the correct terminal python 3 or 2
  3. Then import those modules

! start jupyter from the project folder in Documents

.py files

git init
git add . && git commit -m "initial commit"
git remote add origin <REMOTE_URL>
git push -u origin master

conda jupyter tensorflow: github condjup is the local xfold repo
vs code debug tensorflow: repo deepflow
docker tensorflow image, once again:
docker run -it --rm -v $(realpath ~/notebooks):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-jupyter
https://codeflysurf.com/2021/11/22/running-tensorflow-in-jupyter-notebook-docker/

Uncategorized

A Simple MNIST Example

data analysis and vis in jupyter nb

https://jupyter.org/try
https://hub.gke2.mybinder.org/user/ipython-ipython-in-depth-9if5hwc5/notebooks/binder/Index.ipynb

import tensorflow as tf
print(tf.__version__)

# Load and prepare data, convert labels to one-hot encoding

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)

# Configure the model layers
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(100, activation='relu'))
model.add(tf.keras.layers.Dense(50, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()

# Configure the model training procedure
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
  metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=64, validation_split=0.2)
model.evaluate(x_test, y_test, batch_size=64)

Uncategorized

tensorflow pipenv keras

https://github.com/flowxcode/deepflow
https://pipenv.pypa.io/en/latest/
https://pipenv.pypa.io/en/latest/install/

$ pipenv run python main.py

$ pipenv shell

https://stackoverflow.com/a/68673872/1650038

settings.json points to the pipenv virtualenv in home .local

"python.pythonPath": "${env:HOME}/.local/share/virtualenvs/deepflow-eho_wYiM/bin/python"

launch.json

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            //"program": "/home/linx/.local/share/virtualenvs/deepflow-eho_wYiM/<program>",
        }
    ]
}

test x format

import tensorflow as tf
print("TensorFlow version:", tf.__version__)

Uncategorized

the path to neural networks on flask python

https://towardsdatascience.com/a-financial-neural-network-for-quants-45ec0aaef73c

conda init
conda config --set auto_activate_base false
pip install notebook
cd /your/project/root/directory/
jupyter notebook 

code left

c right

!! don't forget about requirements.txt

https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6

https://www.datacamp.com/community/tutorials/lstm-python-stock-market

Uncategorized

virtual env python

different projects, different requirements, different envs:
create your virtual environment

python3 -m venv tutorial-env
python -m venv venv
$ . venv/bin/activate

https://docs.python.org/3/tutorial/venv.html

keep it out of your git repo via .gitignore:

echo "venv" >> .gitignore
create requirements.txt:
pip freeze > requirements.txt
git add requirements.txt

https://medium.com/wealthy-bytes/the-easiest-way-to-use-a-python-virtual-environment-with-git-401e07c39cde

pip install -r requirements.txt

https://boscacci.medium.com/why-and-how-to-make-a-requirements-txt-f329c685181e

tia

Uncategorized

AST LHS RHS

https://www.codementor.io/@erikeidt/overview-of-a-compiler-zayyljs2s#evalutation-context-lhs-rhs-branch

https://en.wikipedia.org/wiki/Abstract_syntax_tree

https://en.wikipedia.org/wiki/Parse_tree

citation:

Left Hand Side vs. Right Hand Side

For example, in a = b + c, we evaluate the value of b and c, add them together, and then associate or store the result in a.  Here b and c have the context that we call right hand side — b and c are on the right hand side (of an assignment), whereas a has left hand side context — a is on the left hand side of an assignment.  We need the value of b and c yet the location of a to store the result.

A complex left hand side expression will itself also involve some right hand side evaluation.  For example, a[i] = 5 requires evaluating a and i as values (as if right hand side) and only the array indexing itself is evaluated as left hand side (for storage location).  5, of course, is understood as right hand side.
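The RHS/LHS distinction from the citation can be sketched as a tiny evaluator where LHS context returns a storage location (container, key) instead of a value; all names and the node encoding are illustrative:

```python
# Tiny evaluator for the a[i] = 5 example: RHS context produces a value,
# LHS context produces a storage location (container, key).
env = {"a": [0, 0, 0], "i": 1}

def eval_rhs(node):
    kind = node[0]
    if kind == "const":
        return node[1]
    if kind == "var":
        return env[node[1]]
    if kind == "index":  # a[i] used as a value
        return eval_rhs(node[1])[eval_rhs(node[2])]
    raise ValueError(kind)

def eval_lhs(node):
    if node[0] == "var":
        return env, node[1]
    if node[0] == "index":  # a and i are evaluated as values (RHS-style)
        return eval_rhs(node[1]), eval_rhs(node[2])
    raise ValueError(node[0])

def assign(lhs, rhs):
    container, key = eval_lhs(lhs)
    container[key] = eval_rhs(rhs)

assign(("index", ("var", "a"), ("var", "i")), ("const", 5))
print(env["a"])  # [0, 5, 0]
```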

ethical hacking

cache replacement policies

MIN

  • Replace the cache entry that will not be used for the longest time into the future
  • Optimality proof based on exchange: if we evict an entry used sooner, that will trigger an earlier cache miss

Least Recently Used (LRU)

  • Replace the cache entry that has not been used for the longest time in the past
  • Approximation of MIN

Least Frequently Used (LFU)

  • Replace the cache entry used the least often (in the recent past)
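The LRU policy above maps directly onto an ordered dictionary; a minimal sketch:

```python
from collections import OrderedDict

# Minimal LRU cache matching the policy above: evict the entry
# that has not been used for the longest time.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")                                  # "a" is now most recent
c.put("c", 3)                               # evicts "b"
print(list(c.data))  # ['a', 'c']
```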