
SOLVED: PInvoke WindowsAPI CreateFile from C#


When P/Invoking the Windows API CreateFile from a C# program, is it best to call the generic CreateFile, the ANSI CreateFileA, or the Unicode CreateFileW version?

Each version has a different signature with a different CharSet:

// CreateFile generic
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
public static extern SafeFileHandle CreateFile (
    [MarshalAs(UnmanagedType.LPTStr)] string lpFileName,

// CreateFileA ANSI
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
public static extern SafeFileHandle CreateFileA (
    [MarshalAs(UnmanagedType.LPStr)] string lpFileName,

// CreateFileW Unicode
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
public static extern SafeFileHandle CreateFileW (
    [MarshalAs(UnmanagedType.LPWStr)] string lpFileName,

According to Microsoft documentation [1], for C# the default CharSet is CharSet.Ansi. That seems really strange since strings in C# are Unicode. If the documentation is right, it means that CreateFile will ultimately call CreateFileA at runtime (with string conversions to and from ANSI along the way).

Another Microsoft doc [2] says, "When the CharSet is Unicode or the argument is explicitly marked as [MarshalAs(UnmanagedType.LPWSTR)] and the string is passed by value (not ref or out), the string will be pinned and used directly by native code (rather than copied)." This seems great for avoiding copies of potentially large strings and getting maximum performance.

Assume that I want to call the CreateFile flavor that works optimally with C# strings, has the best performance, requires minimal casting/translation, works on a Windows x64 OS, and secondarily has maximum portability.

Approach 1: Call the generic CreateFile but change the signature to CharSet.Unicode.
This may be a problem because CreateFile marshals lpFileName as UnmanagedType.LPTStr whereas CreateFileW marshals it as UnmanagedType.LPWStr. It seems like the marshaling would have to perform conversions to get the right LP type (possibly more than once). Another inefficiency is that CreateFile would have to call CreateFileW internally. Also, I want to make sure the "pinning" is happening for maximum performance, and I'm not sure that it would here.

Approach 2: Call the generic CreateFile with signature CharSet.Auto. This seems to provide maximum portability across target OSes, but it will wind up calling CreateFileA internally, which is inappropriate for C# (Unicode) strings.

Approach 3: Call CreateFileW directly. This also seems less than optimal, because if I am compiling for a different target OS like Win x86 (one that uses only ANSI strings), then the program will not be able to run at all.

It seems like Approach 1 would be best, but the MarshalAs LPTStr doesn't look right to me (considering that the CreateFileW version marshals as LPWStr).

I would appreciate any help you can give on this. I have been digging through dozens of conflicting webpages and cannot find a definitive answer.


[1] DllImportAttribute.CharSet Field

[2] Native interoperability best practices

[3] Copying and Pinning

Posted in S.E.F
via StackOverflow & StackExchange Atomic Web Robots

SOLVED: cannot create INSTEAD OF triggers on table, trying to do a manual cascade delete

Jenny Lawrence:

I am trying to log information from the whole database every time a repair job is deleted, which means its child car_problems rows also need to be deleted, but for some reason it won't let me do INSTEAD OF DELETE on repair_job. Basically the main task for this trigger is to log everything that is deleted from repair_job. Here is my code:

 CREATE TABLE repair_job(repairId varchar(5) PRIMARY KEY, licenceNo 
                        contact varchar(25),time_in timestamp,
                        time_out timestamp, employeeId varchar(5), 
                        laborHrs number,
                         FOREIGN KEY(licenceNo) REFERENCES 
                         FOREIGN KEY(contact) REFERENCES 
                         FOREIGN KEY(employeeId) REFERENCES 

    CREATE TABLE car_problems(probId varchar(7), repairId varchar(5), 
                partName varchar(25), partPrice number,
                FOREIGN KEY(probId) REFERENCES problem(probId), 
                FOREIGN KEY(repairId) REFERENCES 
    CREATE TABLE repairLog(repairId varchar(5), licenceNo varchar(7), 
                        model varchar(25), name varchar(25), 
                        address varchar(30), contact varchar(25),
                        probId varchar(7), probType varchar(30),
                        employeeId varchar(5), mechName varchar(25),
                        mechPhone varchar(10), hrlyRate number,
                        partName varchar(25), partPrice number,
                        time_in timestamp, time_out timestamp, 
                        laborHrs number);

        ON repair_job
        FOR EACH ROW

    mod varchar(25);
    addr varchar(30);
    custName varchar(25);
    mechName varchar(25);
    mechPhone varchar(10);
    hrlyRate number;
    partName varchar(25);
    partPrice number;
    probId varchar(7);
    probType varchar(30);

    CURSOR parts is
        SELECT car_problems.partName , car_problems.partPrice,
                car_problems.probId , problem.probType 
                FROM car_problems, problem
                WHERE car_problems.repairId = :old.repairId 
                AND car_problems.probId = problem.probId;


        select model into mod from car where contact = :old.contact;
        select name, address into custName, addr from Customer 
            where contact = :old.contact;
        select mechName, mechPhone, hrlyRate into mechName, mechPhone, hrlyRate
            from mechanic where employeeId = :old.employeeId;

        OPEN parts;
        LOOP
            FETCH parts into partName, partPrice, probId, probType;
            EXIT WHEN parts%notfound;
            INSERT INTO repairLog(repairId,licenceNo,
                            model, name, address,contact,
                            probId,probType, employeeId,
                            partName,partPrice, time_in, 
                            time_out, laborHrs) 

                            probType, :old.employeeId,
                            mechName, mechPhone, hrlyRate,

        END LOOP;
        CLOSE parts;
    DELETE FROM car_problems where repairId=:old.repairId;
    DELETE FROM repair_job where repairId=:old.repairId;



SOLVED: How to program a UserForm to Update A Monthly Tracker

J Carlos:

I was looking for some help or a walk-through on how to program a UserForm to update a tracker based on the Month, Day, Shift, and Area. I currently have each month broken up into different tabs. I created the UserForm already and have tried a few different things that I found online, but can't seem to get it to work.

I attached the Excel workbook, but if anyone can explain it to me that would be greatly appreciated.

What the Spreadsheet looks like

UserForm I created

Code for the ComboBox


SOLVED: How to create a child process using multiprocessing in Python2.7.10 without the child sharing resources with parent?

Jigar Bhalodia:

We are trying to move our Python 2.7.10 codebase from Windows to Linux. We recently discovered that the multiprocessing library in Python 2.7 behaves differently on Windows vs Linux. We have found many articles, like this one, describing the problem; however, we are unable to find a solution online for Python 2.7. There is a fix for this issue in Python 3.4; however, we are unable to upgrade. Is there any way to use multiprocessing in Python 2.7 on Linux without the child and parent sharing memory? We could also use guidance on modifying the forking.py code in Python 2.7 to ensure the child and parent process aren't sharing memory via copy-on-write. Thanks!


SOLVED: Changing an image in Django inlineformset_factory puts it to the end of the list


Suppose I am making a "How to" Django webapp where users make posts about how to do different things, like:

  • "How to" make a rope
  • "How to" make an earthen pot
  • "How to" learn to ride a bike

You get the idea. I have made the post create view for this. Now when members make a post, they add additional images to it.

Example: "How to" make a rope

  • This has Post Title = How to make a rope
  • Post description = "Some description"
  • Post Image = Main Image

Now they have to show images step by step how the rope is made

  • Image 1: Do this 1st
  • Image 2: Do this 2nd

I am using Django formsets along with my post model to achieve this. Everything works absolutely fine in the create view, no problems. But in the update view things break.

The Problem

The problem is when a user wants to EDIT their post and switch image number 2 to a different image. Even though they changed the 2nd image, that image now ends up at the very end of the list, forcing the user to re-upload all the images to restore the order. This makes my app look buggy.

Example: Lets assume user has the below post

main post Title 
" Some description "
Main Image = Post_image.jpg  

1st Image = A.jpg
   Image Title
   Image description
2nd Image = B.jpg
   Image Title
   Image description
3rd Image = C.jpg
   Image Title
   Image description
4th Image = D.jpg
    Image Title
     Image description
5th Image = E.jpg
     Image Title
     Image description
6th Image = F.img
     Image Title
     Image description

Now if I change the 2nd image B.jpg to b.jpg, b.jpg moves to the very end of the list and the order becomes A, C, D, E, F, b.

Below are my models:

class Post(models.Model):
    user = models.ForeignKey(User, related_name='posts')
    created_at = models.DateTimeField(auto_now_add=True)
    title = models.CharField(max_length=250, unique=True)
    slug = models.SlugField(allow_unicode=True, unique=True,max_length=500)
    post_image = models.ImageField()
    message = models.TextField()

class Prep (models.Model): #(Images)
    post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='post_prep')
    image = models.ImageField(upload_to='images/', blank=True, null=True, default='')
    image_title = models.CharField(max_length=100, default='')
    image_description = models.CharField(max_length=250, default='')

My post Edit View:

class PostPrepUpdate(LoginRequiredMixin, UpdateView):
    model = Post
    fields = ('title', 'message', 'post_image')
    template_name = 'posts/post_edit.html'
    success_url = reverse_lazy('home')

    def get_context_data(self, **kwargs):
        data = super(PostPrepUpdate, self).get_context_data(**kwargs)
        if self.request.POST:
            data['prep'] = PrepFormSet(self.request.POST, self.request.FILES, instance=self.object)
        else:
            data['prep'] = PrepFormSet(instance=self.object)
        return data

    def form_valid(self, form):
        context = self.get_context_data()
        prep = context['prep']
        with transaction.atomic():
            self.object = form.save()

            if prep.is_valid():
                prep.instance = self.object
                prep.save()
        return super(PostPrepUpdate, self).form_valid(form)

My post create view

def post_create(request):
    ImageFormSet = modelformset_factory(Prep, fields=('image', 'image_title', 'image_description'),
                                        extra=12, max_num=12)
    if request.method == "POST":
        form = PostForm(request.POST or None, request.FILES or None)
        formset = ImageFormSet(request.POST or None, request.FILES or None)
        if form.is_valid() and formset.is_valid():
            instance = form.save(commit=False)
            instance.user = request.user
            instance.save()
            for f in formset.cleaned_data:
                try:
                    photo = Prep(post=instance, image=f['image'], image_title=f['image_title'], image_description=f['image_description'])
                    photo.save()
                except Exception as e:
                    break
            return redirect('posts:single', username=instance.user.username, slug=instance.slug)
    else:
        form = PostForm()
        formset = ImageFormSet(queryset=Prep.objects.none())
    context = {
        'form': form,
        'formset': formset,
    }
    return render(request, 'posts/post_form.html', context)

My Forms.py

class PostEditForm(forms.ModelForm):
    class Meta:
        model = Post
        fields = ('title', 'message', 'post_image' )

class PrepForm(forms.ModelForm):
    class Meta:
        model = Prep
        fields = ('image', 'image_title', 'image_description')

PrepFormSet = inlineformset_factory(Post, Prep, form=PrepForm, extra=5, max_num=7, min_num=2)

Need help fixing this issue: if they change Image 2, it should stay in the number 2 position and not move to the end of the list.
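One common fix (an assumption, not something from the original post) is to give each Prep row an explicit position field and update the changed row in place rather than deleting and re-adding it, always ordering by that field (e.g. a PositiveIntegerField with Meta.ordering = ['position'] on the model). A minimal in-memory Python sketch of the idea:

```python
# Hypothetical rows: each image carries an explicit position that is
# independent of when the row was last written.
images = [
    {"position": 1, "file": "A.jpg"},
    {"position": 2, "file": "B.jpg"},
    {"position": 3, "file": "C.jpg"},
]

def replace_image(images, position, new_file):
    # Update the matching row in place instead of removing it and
    # appending a new one; ordering by position then stays stable.
    for row in images:
        if row["position"] == position:
            row["file"] = new_file
    return sorted(images, key=lambda r: r["position"])

ordered = replace_image(images, 2, "b.jpg")
```

The same pattern in Django means saving the existing Prep instance with a new image file instead of creating a fresh row, so the ordering key never changes.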


SOLVED: Click() is not working though there's no any error but its not opening up - (Cypress automation)

Bandana Singh:

The last command, cy.get('[data-cy=impact-area-table]').contains(impactareas.name).should('be.visible').click({force: true});, is not working. There's no error; it shows that it's fine and the test passes, but it does not open up the impact area. Why?

import { fillImpactAreaForm } from './utils';
import {contact, campaign, impactArea,impactareas} from '../support/commands.js';

describe('Fundraising test suite', function () {

    beforeEach(() => {
        // (setup elided in the original post)
    });

    it('should allow the user to create transactions', () => {
        cy.seedOrgAndLogin().then(() => {
            return cy.factoryCreate('PersonContacts', contact);
        }).then(() => {
            cy.factoryCreate('Campaigns', campaign);
        }).then(() => {
            cy.factoryCreate('ImpactAreas', impactArea);
        }).then(() => {
            cy.get('[data-cy="sidebar-Impact Areas"]').click({force: true});

            cy.wait(2000);

            cy.get('[data-cy=impact-area-table]').contains(impactareas.name).should('be.visible').click({force: true});
            //cy.get('.content-scroll-wrapper.block-content').find('.content-scroll-body').contains(impactArea.name).click({force: true});
        });
    });
});




SOLVED: How do i pass result of callback into variable and access the var freely


I know there are a ton of questions similar to mine, but I didn't see any case that helps me.

I have a callback from a native function bridge, and this is how I use it in JS:

  getAllParameter((data) => {
    console.log(data) // data is a JavaScript object
  })

I've tried this to get the value of data:

  return new Promise((resolve) => resolve(showToken(data.Token)))

  async function showToken(token) {
    var res = await token
    return res
  }

  var isiToken = showToken()

but the result is:

{ _40: 0, _65: 0, _55: null, _72: null }

I don't know what's wrong with my code. I want to get the value of data outside of getAllParameter; how can I do that properly?


SOLVED: Why does splitting a string result in Split() not a valid property


I am trying to write one of my first C# scripts for a home automation solution (HomeSeer). I have other issues to resolve with the code below; however, the simple line:

String[] parm = line.Split(",");

Results in the error:

Type 'string' does not contain a definition for `Split' and no extension method 'Split' of type 'string' could be found (are you missing a using directive or an assembly reference?)

I will post another question for my other issues

using System;
using System.Collections.Generic;

public void Main(string line)
{
    // Split takes a char (or a char[]) here, not a string literal
    String[] parm = line.Split(',');
    var windowDoorOpenVar = hs.GetVar("WindowDoorOpen");
    // test for null before touching .Size, not after
    List<string> windowDoorOpen;
    if (windowDoorOpenVar == null || windowDoorOpenVar.Size == 0)
    {
        windowDoorOpen = new List<string>();
    }
    else
    {
        windowDoorOpen = windowDoorOpenVar;
    }

    switch (parm[0])
    {
        case "Open":
            // (handling elided)
            break;
        case "Closed":
            // (handling elided)
            break;
    }
    hs.SaveVar("WindowDoorOpen", windowDoorOpen);
}


SOLVED: Why does exchanging variable name changes output of C program?


I tried this example of an array with post-increment/pre-increment on its elements:


#include <stdio.h>

int main()
{
    int j[5] = {5, 1, 15, 20, 25};
    int k, l, m, n;

    n = ++j[1];   /* j[1] becomes 2, n = 2 */
    k = ++j[1];   /* j[1] becomes 3, k = 3 */
    l = j[1]++;   /* l = 3, then j[1] becomes 4 */
    m = j[k++];   /* m = j[3] = 20, then k becomes 4 */

    printf("\n%d, %d, %d, %d", n, k, l, m);

    return 0;
}

here the output is :

2, 4, 3, 20

And if I change the order of n and k, i.e. instead of

n = ++j[1];
k = ++j[1];

I write

k = ++j[1];
n = ++j[1];

The output becomes:

3, 3, 3, 15

I tried this with the MinGW compiler on Windows 10 and also with Kali Linux's GCC... same result.

It is as if just using a different variable name alters the output of the program. What might be the cause?


SOLVED: Schedule task using whenver with custom time



def timezoned(time)
  Time.zone = "Pacific Time (US & Canada)"
  # (conversion of `time` elided in the original post)
end

every 1.day, at: timezoned('5:30 am') do
  runner 'App1::Task1.perform_async'
end

every 1.day, at: timezoned('5:31 am') do
  runner 'App1::Task2.perform_async'
end

every 1.day, at: timezoned('5:33 am') do
  runner 'App2::Task1.perform_async'
end

every 1.day, at: timezoned('5:34 am') do
  runner 'App2::Task2.perform_async'
end

every 1.day, at: timezoned('5:36 am') do
  runner 'App3::Task1.perform_async'
end

every 1.day, at: timezoned('5:37 am') do
  runner 'App3::Task2.perform_async'
end

every 1.day, at: timezoned('5:40 am') do
  runner 'App4::Task1.perform_async'
end

every 1.day, at: timezoned('5:41 am') do
  runner 'App4::Task2.perform_async'
end

every 1.day, at: timezoned('5:42 am') do
  runner 'App4::Task3.perform_async'
end

every 1.day, at: timezoned('5:43 am') do
  runner 'App5::Task4.perform_async'
end

I am using Sidekiq with the whenever gem to schedule jobs in Rails. Above is my schedule.rb. Is there a way to schedule these tasks more concisely than the above (e.g. time + 1.minute or time + 2.minutes)?

Any better approach for scheduling these tasks would be helpful.
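The idea of deriving the staggered times from a base time plus a per-task offset, rather than hard-coding each minute, can be sketched like this (Python shown purely to illustrate the arithmetic; the task names are taken from the schedule above):

```python
from datetime import datetime, timedelta

# A base time and per-task minute offsets replace the hand-written
# '5:30 am', '5:31 am', ... strings in schedule.rb.
base = datetime.strptime("5:30 am", "%I:%M %p")
tasks = ["App1::Task1", "App1::Task2", "App2::Task1"]

schedule = {
    task: (base + timedelta(minutes=i)).strftime("%I:%M %p").lstrip("0").lower()
    for i, task in enumerate(tasks)
}
```

Since schedule.rb is plain Ruby, the same loop could be written there directly with each_with_index over an array of runner strings.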


SOLVED: Failed to process okhttp-3.14.0.jar


Trying to follow along with a course on teamtreehouse on building an Android weather app. The teacher is able to get their app to run inside the emulator (and crash), but mine won't even run in the emulator and just gives me a bunch of errors. The teacher has pointed out that there are intentional errors that will be fixed later on.

Here's what some of the errors are.


SOLVED: Project not compiling after adding androidx.room:room-compiler:2.1.0-alpha05

Kodak Gifted:

Application gradle file and Project gradle file.

Error: cannot find symbol class DataBindingComponent

        implementation 'androidx.room:room-runtime:2.1.0-alpha05'
        annotationProcessor 'androidx.room:room-compiler:2.1.0-alpha05'

    allprojects {
        repositories {
            maven { url "https://jitpack.io" }
            maven { url "https://kotlin.bintray.com/kotlinx/" }
        }
    }


SOLVED: Vuex commit does not trigger v-show


I'm trying Vuex for the first time, and I found that a v-show directive is not being triggered after a mutation commit on the store.

// store.js
import Vue from "vue"
import Vuex from "vuex"

// the plugin must be installed before the store is created
Vue.use(Vuex)

const states = {
    app: {
        init: "APP_INIT"
    }
}

const store = new Vuex.Store({
    state: {
        appState: ""
    },
    mutations: {
        changeAppState (state, appState) {
            state.appState = appState
        }
    },
    getters: {
        isVisible: state => state.appState === states.app.init
    }
})

export { store }


<template v-show="isVisible">
    <div id="componentA"></div>
</template>

<script>
    export default {
        name: "ComponentA",
        computed: {
            isVisible () {
                return this.$store.getters.isVisible
            },
            appState () {
                return this.$store.state.appState
            }
        },
        watch: {
            appState (newVal, oldVal) {
                console.log(`Changed state: ${oldVal} => ${newVal}`)
            }
        },
        mounted () {
            setTimeout(() => {
                this.$store.commit("changeAppState", "APP_INIT")
            }, 1000)
        }
    }
</script>

<style scoped lang="scss">
    #componentA {
        width: 400px;
        height: 400px;
        background: red;
    }
</style>

I've defined a getter isVisible which should evaluate to true if the state.appState property is equal to the string APP_INIT. I thought that committing the mutation would trigger the reactivity system and force a re-render of the view, but this is not happening. Why?


SOLVED: How to print lines with duplicated fields?

Raj KP:

I need to print lines with duplicated fields; I tried using sed but it's not working.
The input file has two lines:

s1/s2/s3/s4/s5/u0 a1_b2_c3_d4_e5_f6_g7 s1/s2/s3/s4/s5/u1
s1/s2/s3/s4/s5/u0 a1_b2_c3_d4_e5_f6_g7 s1/s2/s3/s4/s5/u0

The output should be only the second line, because it has an exact duplicate field.
But the command below prints both lines:

sed -rn '/(\b\w+\b).*\b\1\b/ p' input_file
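The likely reason both lines match is that \w+ in the sed pattern matches sub-tokens (e.g. s1 occurs inside every /-separated path), not whole whitespace-separated fields. A small Python sketch of the whole-field comparison the question actually wants:

```python
lines = [
    "s1/s2/s3/s4/s5/u0 a1_b2_c3_d4_e5_f6_g7 s1/s2/s3/s4/s5/u1",
    "s1/s2/s3/s4/s5/u0 a1_b2_c3_d4_e5_f6_g7 s1/s2/s3/s4/s5/u0",
]

def has_duplicate_field(line):
    # Compare entire whitespace-separated fields; a line qualifies
    # only when two fields are exactly equal.
    fields = line.split()
    return len(fields) != len(set(fields))

dups = [line for line in lines if has_duplicate_field(line)]
```

The same idea in a shell one-liner would compare whole fields (e.g. with awk looping over $1..$NF) rather than \w+ word runs.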



SOLVED: how to set new range variable to one cell address


I have the code below that works (almost).

Sub Find_PhaseCode_Cell2()
    Dim rng As Range
    Dim newrng As Range
    Dim wb As Workbook
    Dim ws As Worksheet
    Dim x As Long
    Set wb = ThisWorkbook
    Set ws = wb.Sheets("Control budget ")
    With ws
        Set rng = .Range("b57:b64")
        For x = 57 To 64
            If .Cells(x, 2).Value <> "" Then
                Debug.Print .Cells(x, 2).Address
            End If
        Next x
    End With
End Sub

Where the Debug.Print statement is, I would like to assign that cell to the variable newrng.

If I try Set newrng = Cells(x, 2).Address, I get the error:

object required

What am I doing wrong?


SOLVED: Apply function from external library in pyspark/pandas

Ahmad Suliman:

I have a DataFrame containing two columns with coordinates in the geographic coordinate system (latitude and longitude). I need to apply a method from the OSMnx library which returns the nearest node and the distance to specified points, as follows:

osmnx.utils.get_nearest_node(G, point, method='haversine', return_dist=False)

Source: https://osmnx.readthedocs.io/en/stable/osmnx.html?highlight=get_nearest_node#osmnx.utils.get_route_edge_attributes, where G is a network graph built using one of the methods described in https://geoffboeing.com/2016/11/osmnx-python-street-networks/

My attempt:

udf_node=fn.udf(lambda x,y:ox.get_nearest_node(G, (x,y), return_dist=True)[0],IntegerType())
udf_node_dist=fn.udf(lambda x,y:ox.get_nearest_node(G, (x,y), return_dist=True)[1],FloatType())

df = df.withColumn('node',udf_node(fn.col('longitude'), fn.col('latitude')))
df = df.withColumn('node_dist',udf_node_dist(fn.col('longitude'), fn.col('latitude')))

While calling the show function, or trying to save the resulting DataFrame as parquet, I get the following error:

Py4JJavaError: An error occurred while calling o976.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 34.0 failed 1 times, most recent failure: Lost task 0.0 in stage 34.0 (TID 104, localhost, executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)

Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)

How can I apply the external library function in pyspark?


I also tried:

def nearest_node_dist(x,y):
    return ox.get_nearest_node(G, (x,y), return_dist=True) #Output: (nearest_node, dist)

udf_node=fn.udf(lambda x,y: nearest_node_dist(x,y)[0], IntegerType())
udf_dist=fn.udf(lambda x,y: nearest_node_dist(x,y)[1], FloatType())

df = df.withColumn('node',udf_node(fn.col('longitude'), fn.col('latitude')))
df = df.withColumn('dist',udf_dist(fn.col('longitude'), fn.col('latitude')))

I also got the same error.

In pandas it's running well for a sample of the dataframe, where the code is as follows:

pddfsample=pddf.head() ## pddf is pandas dataframe

def nearest_node(x,y):
    nearest_node,dist=ox.get_nearest_node(G, (x,y), return_dist=True)
    return nearest_node 
def dist_to_Nnode(x,y):
    nearest_node,dist=ox.get_nearest_node(G, (x,y), return_dist=True)
    return dist 

pddf['nearest_node'] = np.vectorize(nearest_node)(pddf['longitude'],pddf['latitude'])

pddf['dist_to_Nnode'] = np.vectorize(dist_to_Nnode)(pddf['longitude'],pddf['latitude'])   

Although it runs well, it takes a lot of time for the whole dataframe.
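One incidental inefficiency in both attempts above: the UDF pair and the np.vectorize pair each call the nearest-node lookup twice per row, once for the node and once for the distance. A sketch of unpacking both values from a single call per row (nearest_node_dist here is a hypothetical stand-in for ox.get_nearest_node(..., return_dist=True)):

```python
import pandas as pd

# Hypothetical stand-in for the expensive OSMnx lookup; it returns
# (nearest_node, dist) just like get_nearest_node(..., return_dist=True).
def nearest_node_dist(x, y):
    return int(x + y), float(abs(x - y))

pddf = pd.DataFrame({"longitude": [1.0, 2.0], "latitude": [3.0, 4.0]})

# One call per row, unpacked into two columns at once.
pddf[["nearest_node", "dist_to_Nnode"]] = pddf.apply(
    lambda r: pd.Series(nearest_node_dist(r["longitude"], r["latitude"])),
    axis=1,
)
```

The same single-call idea carries over to the Spark side by having one UDF return a struct of both values and selecting its fields, instead of two separate UDFs.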


SOLVED: How to update a document based on two filter conditions?

That Guy:

I have a document structure that looks like this:

type Document = {
  _id: string
  title: string
  variants: VariantType[]
}

type VariantType = {
  timestamp: Int
  active: Boolean
  content: any[]
}

I'm trying to filter a document based on two filter conditions in one query. First I want to match the _id and then find a specific variant based on a timestamp.

My previous version of the query was filtering based on the active key.

const updatedDocument = await allDocuments
        .findOneAndUpdate({ _id: mongoId, 'variants.active': false }, .... };

Changing it to

const updatedDocument = await allDocuments
        .findOneAndUpdate({ _id: mongoId, 'variants.timestamp': timestamp }, .... };

returns null.

Can Mongo even match a document like this? I saw that there is an $eq query selector, but I can't seem to get it working either.
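For intuition, a dotted path into an array matches the document when any array element matches. A pure-Python sketch of that semantics (not MongoDB code), which also shows how a type mismatch on timestamp (string vs number, a plausible but unconfirmed cause of the null result) makes the match fail:

```python
doc = {
    "_id": "abc",
    "variants": [
        {"timestamp": 100, "active": False},
        {"timestamp": 200, "active": True},
    ],
}

def matches(doc, doc_id, ts):
    # Mirrors {_id: doc_id, 'variants.timestamp': ts}: the document
    # matches when the _id matches AND any array element has the value.
    return doc["_id"] == doc_id and any(
        v.get("timestamp") == ts for v in doc["variants"]
    )
```

So if the document can be fetched by _id alone but the combined filter returns null, comparing the stored timestamp's type against the query value is a reasonable first check.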


SOLVED: Why the utilization of cpu is too high and gpu so low when using keras

zheyuan xu:

Why is the CPU utilization so high and the GPU utilization so low when using Keras? As in the attached picture: every 5 seconds, GPU utilization is about 80% for 1 second and 0 for the rest of the time. I use tensorflow.keras.utils.Sequence to load data and model.fit_generator to train. I don't know how to deal with this problem.


def create_callbacks(opt, steps_per_epoch, model=None):
    log_dir = os.path.join(opt.root_path, opt.log_dir)
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    #tensorboard = TensorBoard(log_dir=log_dir, write_graph=True)

    result_path = os.path.join(opt.root_path, opt.result_path)
    if not os.path.exists(result_path):
        os.makedirs(result_path)

    if model is not None:
        checkpoint = ParallelModelCheckpoint(model, os.path.join(result_path, 'ep{epoch:03d}-val_acc{val_acc:.2f}.h5'),
                                    monitor='val_acc', save_weights_only=True, save_best_only=True, period=1)
    else:
        checkpoint = ModelCheckpoint(os.path.join(result_path, 'ep{epoch:03d}-val_acc{val_acc:.2f}.h5'),
                                    monitor='val_acc', save_weights_only=True, save_best_only=True, period=1)
    early_stopping = EarlyStopping(monitor='val_acc', min_delta=0, patience=10)
    learning_rate_scheduler = SGDRScheduler_with_WarmUp(0, opt.lr, steps_per_epoch, lr_decay=opt.lr_decay,
                                                        cycle_length=opt.cycle_length, multi_factor=opt.multi_factor)

    print_lr = PrintLearningRate()

    return [learning_rate_scheduler, print_lr, checkpoint, early_stopping]

def train(opt):
    video_input = Input(shape=(None, None, None, 3))
    model = nets.network[opt.network](video_input, num_classes=opt.num_classes)
    print("Create {} model with {} classes".format(opt.network, opt.num_classes))

    if opt.pretrained_weights is not None:
        print("Loading weights from {}".format(opt.pretrained_weights))
        model.load_weights(opt.pretrained_weights)

    optimizer = get_optimizer(opt)

    train_data_generator = DataGenerator(opt.data_name, opt.video_path, opt.train_list, opt.name_path,
                                        'train', opt.batch_size, opt.num_classes, True, opt.short_side,
                                        opt.crop_size, opt.clip_len, opt.n_samples_for_each_video)
    val_data_generator = DataGenerator(opt.data_name, opt.video_path, opt.val_list, opt.name_path, 'val',
                                        opt.batch_size, opt.num_classes, False, opt.short_side,
                                        opt.crop_size, opt.clip_len, opt.n_samples_for_each_video)

    callbacks = create_callbacks(opt, max(1, train_data_generator.__len__()), model)

    if len(opt.gpus) > 1:
        print('Using multi gpus')
        parallel_model = multi_gpu_model(model, gpus=len(opt.gpus))
        parallel_model.compile(optimizer=optimizer, loss=categorical_crossentropy, metrics=['accuracy'])
        parallel_model.fit_generator(train_data_generator, steps_per_epoch=max(1, train_data_generator.__len__()),
                            epochs=opt.epochs, validation_data=val_data_generator, validation_steps=max(1, val_data_generator.__len__()),
                            workers=opt.workers, callbacks=callbacks, use_multiprocessing=True)
    else:
        model.compile(optimizer=optimizer, loss=categorical_crossentropy, metrics=['accuracy'])
        model.fit_generator(train_data_generator, steps_per_epoch=max(1, train_data_generator.__len__()),
                            epochs=opt.epochs, validation_data=val_data_generator, validation_steps=max(1, val_data_generator.__len__()),
                            workers=opt.workers, callbacks=callbacks, use_multiprocessing=True)
    model.save_weights(os.path.join(os.path.join(opt.root_path, opt.result_path), 'trained_weights_final.h5'))

if __name__=="__main__":
    opt = parse_opts()
    os.environ['CUDA_VISIBLE_DEVICES'] = ",".join(map(str, opt.gpus))
    train(opt)

Some parameters are like this:

--num_classes=60 \
--workers=4 \
--batch_size=64 \
--crop_size=160 \
--clip_len=32 \
--short_side 192 224 \
--gpus 8 9

Some code from my Keras data loader:

import os
import random
import math
import copy
import time
import numpy as np
from tensorflow.keras.utils import Sequence
from .spatial_transforms import RandomCrop, Scale, RandomHorizontalFlip, CenterCrop, Compose, Normalize, PreCenterCrop
from .tempora_transforms import TemporalRandomCrop, TemporalCenterCrop
from .utils import load_value_file, load_clip_video

def get_ntu(video_path, file_path, name_path, mode, num_classes):
    lines = open(name_path, 'r').readlines()

    assert num_classes == len(lines)

    video_files = []
    label_files = []

    for path in open(file_path, 'r'):
        label = int(path.split('A')[1][:3])-1
        video_files.append(os.path.join(video_path, path.strip()))
        label_files.append(label)

    return video_files, label_files

def get_ucf101(video_path, file_path, name_path, mode, num_classes):
    name2index = {}

    lines = open(name_path, 'r').readlines()
    for i, class_name in enumerate(lines):
        class_name = class_name.split()[1]
        name2index[class_name] = i

    assert num_classes == len(name2index)

    video_files = []
    label_files = []
    for path_label in open(file_path, 'r'):
        if mode == 'train':
            path, _ = path_label.split()
        elif mode == 'val':
            path = path_label
        else:
            raise ValueError('mode must be train or val')
        pathname, _ = os.path.splitext(path)
        video_files.append(os.path.join(video_path, pathname))
        label = pathname.split('/')[0]
        label_files.append(name2index[label])
    return video_files, label_files

class DataGenerator(Sequence):
    def __init__(self, data_name, video_path, file_path, 
                 name_path, mode, batch_size, num_classes, 
                 shuffle, short_side=[256, 320], crop_size=224, 
                 clip_len=64, n_samples_for_each_video=1):
        self.batch_size = batch_size
        self.num_classes = num_classes
        self.shuffle = shuffle
        if data_name == 'ucf101':
            self.video_files, self.label_files = get_ucf101(video_path, file_path, name_path, mode, num_classes)
        elif data_name == 'ntu':
            self.video_files, self.label_files = get_ntu(video_path, file_path, name_path, mode, num_classes) 
        if mode == 'train':
            # The original transform lists were cut off in the post; a plausible
            # train-time pipeline built from the imports above:
            self.spatial_transforms = Compose([
                RandomCrop(crop_size),
                RandomHorizontalFlip(),
                Normalize(),
            ])
            self.temporal_transforms = TemporalRandomCrop(clip_len)
        elif mode == 'val':
            self.spatial_transforms = Compose([
                CenterCrop(crop_size),
                Normalize(),
            ])
            self.temporal_transforms = TemporalCenterCrop(clip_len)
        else:
            raise ValueError('mode must be train or val')

        self.dataset = self.makedataset(n_samples_for_each_video, clip_len)
        print('Dataset loading Successful!!!')
        if self.shuffle:
            random.shuffle(self.dataset)

    def __len__(self):
        return math.ceil(len(self.video_files)/self.batch_size)

    def __getitem__(self, index):
        batch_dataset = self.dataset[index*self.batch_size:(index+1)*self.batch_size]
        video_data, label_data = self.data_generator(batch_dataset)
        return video_data, label_data

    def on_epoch_end(self):
        if self.shuffle:
            random.shuffle(self.dataset)

    def makedataset(self, n_samples_for_each_video, clip_len):
        dataset = []
        for i, video_file in enumerate(self.video_files):
            if i % 1000 == 0:
                print('dataset loading [{}/{}]'.format(i, len(self.video_files)))

            if not os.path.exists(video_file):
                print('{} does not exist'.format(video_file))
                continue

            n_frame_path = os.path.join(video_file, 'n_frames')
            n_frames = int(load_value_file(n_frame_path))

            if n_frames <= 0:
                continue

            # keys inferred from how data_generator below reads each sample
            sample = {
                'video_path': video_file,
                'label': self.label_files[i],
            }
            if n_samples_for_each_video == 1:
                sample['frame_indices'] = list(range(1, n_frames + 1))
                dataset.append(sample)
            else:
                if n_samples_for_each_video > 1:
                    step = max(1, math.ceil((n_frames - 1 - clip_len) /
                                            (n_samples_for_each_video - 1)))
                else:
                    step = clip_len
                for j in range(1, n_frames, step):
                    sample_j = copy.deepcopy(sample)
                    sample_j['frame_indices'] = list(range(j, min(n_frames + 1, j + clip_len)))
                    dataset.append(sample_j)

        return dataset

    def data_generator(self, batch_dataset):
        video_data = []
        label_data = []

        for data in batch_dataset:
            path = data['video_path']
            frame_indices = data['frame_indices']

            if self.temporal_transforms is not None:
                frame_indices = self.temporal_transforms(frame_indices)

            clip = load_clip_video(path, frame_indices)

            if self.spatial_transforms is not None:
                clip = [self.spatial_transforms(img) for img in clip]

            clip = np.stack(clip, 0)
            video_data.append(clip)
            label_data.append(data['label'])

        video_data = np.array(video_data)
        # one-hot encode the integer labels
        label_data = np.eye(self.num_classes)[label_data]
        return video_data, label_data
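As a sanity check on the generator's indexing (a toy sketch, not from the post): __len__ must use math.ceil so the ragged final batch is still yielded, and the slice in __getitem__ handles the short batch automatically because Python slicing clamps at the end of the list:

```python
import math

# Stand-in for self.dataset: 130 dummy samples with a batch size of 64.
dataset = list(range(130))
batch_size = 64

# __len__ uses ceil so the smaller final batch is still served.
n_batches = math.ceil(len(dataset) / batch_size)

# __getitem__'s slice pattern; the last slice is simply shorter.
sizes = [len(dataset[i * batch_size:(i + 1) * batch_size]) for i in range(n_batches)]
print(n_batches, sizes)  # 3 [64, 64, 2]
```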

Posted in S.E.F
via StackOverflow & StackExchange Atomic Web Robots

SOLVED: Changing data label in power bi

Germán Rogers Tirado:

I'm here because I have not found a way to change the display units the way I want.

For example, for Millions the system shows M, but I want it to show MM instead.

How can I do that?



SOLVED: A Prolog rule that returns the subgoals in a proof search


I am looking for a simple, straightforward way to write a rule that outputs the antecedents in a proof search (the successes of subgoals). Suppose I have the code

happy(X):-rich(X), healthy(X).
happy(X):-winsLottery(X), healthy(X).

I would like a rule antecedents(L, happy(john)), which returns

L = [
    [rich(john), healthy(john)],
    [winsLottery(john), healthy(john)]
].
I know about trace/0, but I am looking for a rule. I also tried clause/2, but this just gets the clause where the target event occurs and not any previous antecedents.

My motive is that I am interested in constructing a system that provides explanations for events. I know that I could assert causes([rich(X), healthy(X)], happy(X)) in the knowledge base, but I am looking for clean and simple Prolog code that I can translate to classical first-order logic (where lists are a bit problematic).
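Not a Prolog answer, but the intended behaviour can be prototyped outside Prolog. Here is a toy backward chainer in Python over ground atoms (all facts, rules, and names below are made up for illustration) that collects every antecedent list whose members all prove true, which is what the antecedents rule above is asked to return:

```python
# Toy backward chainer over ground atoms; rules map a head atom to a list of
# alternative antecedent (body) lists. Purely illustrative, not Prolog.
facts = {'rich(john)', 'winsLottery(john)', 'healthy(john)'}
rules = {
    'happy(john)': [['rich(john)', 'healthy(john)'],
                    ['winsLottery(john)', 'healthy(john)']],
}

def antecedents(goal):
    """Return every antecedent list for `goal` whose members all prove true."""
    proofs = []
    for body in rules.get(goal, []):
        # an atom proves true if it is a fact or is itself derivable
        if all(atom in facts or antecedents(atom) for atom in body):
            proofs.append(body)
    return proofs

print(antecedents('happy(john)'))
# [['rich(john)', 'healthy(john)'], ['winsLottery(john)', 'healthy(john)']]
```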

