QuestionId | AnswerId | QuestionBody | QuestionTitle | AnswerBody
---|---|---|---|---
76382104 | 76383359 | I am building a calendar app, and my code mostly uses LocalDate to set up my RecyclerView and some other functions.
I have a tablet running Android 7.1.1, so it can't run anything in my code related to LocalDate.
I want to challenge myself and make my app run down to min SDK 16. Are there any alternatives to LocalDate for my code? I also use Calendar, but for my RecyclerView it takes almost triple the code to do the same task. Any other suggestion would be appreciated.
| android app LocalDate alternative for older API level | If you want to have Java 8+ features available in lower Android version, you can use one of the following options:
import the ThreeTen Android Backport (ABP)
make use of Android API Desugaring
With (one of) these two, you can use nearly all of the classes and functions of java.time in Android API Levels < 26, including LocalDate.
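If you go the desugaring route, the Gradle side looks roughly like the following sketch (module-level build.gradle; the desugar_jdk_libs version shown is just an example, check the current release notes for the latest one):

```groovy
android {
    compileOptions {
        // Enable core library desugaring so java.time works below API 26
        coreLibraryDesugaringEnabled true
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    coreLibraryDesugaring 'com.android.tools:desugar_jdk_libs:1.1.5'
}
```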
|
76383806 | 76384014 | When running sam build, I get a dependency error in the module that depends on another local module within my project. From mvn or IntelliJ I have no problems, but when I execute sam build, I get "cannot find symbol" and missing-package errors for classes from the other module.
Build Failed
Error: JavaMavenWorkflow:MavenBuild - Maven Failed: [INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] backend [pom]
[INFO] Api [jar]
[INFO] Register [jar]
[INFO]
[INFO] ------------------------< com.backend:backend >-------------------------
[INFO] Building backend 0.0.1-SNAPSHOT [1/3]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.2.0:clean (default-clean) @ backend ---
[INFO]
[INFO] --- spring-boot-maven-plugin:3.0.6:repackage (repackage) @ backend ---
[INFO]
[INFO] --- maven-install-plugin:3.0.1:install (default-install) @ backend ---
[INFO] Installing /tmp/tmptflbmqgd/pom.xml to /home/laingard/.m2/repository/com/backend/backend/0.0.1-SNAPSHOT/backend-0.0.1-SNAPSHOT.pom
[INFO]
[INFO] --------------------------< com.backend:Api >---------------------------
[INFO] Building Api 0.0.1-SNAPSHOT [2/3]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.2.0:clean (default-clean) @ Api ---
[INFO] Deleting /tmp/tmptflbmqgd/Api/target
[INFO]
[INFO] --- maven-resources-plugin:3.3.1:resources (default-resources) @ Api ---
[INFO] Copying 0 resource from src/main/resources to target/classes
[INFO] skip non existing resourceDirectory /tmp/tmptflbmqgd/Api/config
[INFO]
[INFO] --- maven-compiler-plugin:3.10.1:compile (default-compile) @ Api ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 93 source files to /tmp/tmptflbmqgd/Api/target/classes
[INFO]
[INFO] --- maven-resources-plugin:3.3.1:testResources (default-testResources) @ Api ---
[INFO] skip non existing resourceDirectory /tmp/tmptflbmqgd/Api/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.10.1:testCompile (default-testCompile) @ Api ---
[INFO] Changes detected - recompiling the module!
[INFO]
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ Api ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- maven-jar-plugin:3.3.0:jar (default-jar) @ Api ---
[INFO] Building jar: /tmp/tmptflbmqgd/Api/target/Api-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:3.0.6:repackage (repackage) @ Api ---
[INFO] Replacing main artifact with repackaged archive
[INFO]
[INFO] --- maven-install-plugin:3.0.1:install (default-install) @ Api ---
[INFO] Installing /tmp/tmptflbmqgd/Api/pom.xml to /home/laingard/.m2/repository/com/backend/Api/0.0.1-SNAPSHOT/Api-0.0.1-SNAPSHOT.pom
[INFO] Installing /tmp/tmptflbmqgd/Api/target/Api-0.0.1-SNAPSHOT.jar to /home/laingard/.m2/repository/com/backend/Api/0.0.1-SNAPSHOT/Api-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] ------------------------< com.backend:Register >------------------------
[INFO] Building Register 0.0.1-SNAPSHOT [3/3]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.2.0:clean (default-clean) @ Register ---
[INFO] Deleting /tmp/tmptflbmqgd/Register/target
[INFO]
[INFO] --- maven-resources-plugin:3.3.1:resources (default-resources) @ Register ---
[INFO] Copying 0 resource from src/main/resources to target/classes
[INFO] skip non existing resourceDirectory /tmp/tmptflbmqgd/Register/config
[INFO]
[INFO] --- maven-compiler-plugin:3.10.1:compile (default-compile) @ Register ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to /tmp/tmptflbmqgd/Register/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[4,37] package com.careerwatch.Api.exception does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[5,38] package com.careerwatch.Api.repository does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[18,19] cannot find symbol
symbol: class UserRepository
location: class com.careerwatch.register.service.RegisterServiceImpl
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[4,38] package com.careerwatch.Api.repository does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/mapper/RegisterDtoMapper.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[15,1] cannot find symbol
symbol: class UserRepository
location: class com.careerwatch.register.service.RegisterServiceImpl
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/mapper/RegisterDtoMapper.java:[13,12] cannot find symbol
symbol: class User
location: class com.careerwatch.register.mapper.RegisterDtoMapper
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[26,5] cannot find symbol
symbol: class UserRepository
location: class com.careerwatch.register.jwt.JwtService
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[34,33] cannot find symbol
symbol: class User
location: class com.careerwatch.register.jwt.JwtService
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[66,47] cannot find symbol
symbol: class User
location: class com.careerwatch.register.jwt.JwtService
[INFO] 12 errors
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for backend 0.0.1-SNAPSHOT:
[INFO]
[INFO] backend ............................................ SUCCESS [ 0.565 s]
[INFO] Api ................................................ SUCCESS [ 3.314 s]
[INFO] Register ........................................... FAILURE [ 0.534 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.575 s
[INFO] Finished at: 2023-06-01T12:51:53-03:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) on project Register: Compilation failure: Compilation failure:
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[4,37] package com.careerwatch.Api.exception does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[5,38] package com.careerwatch.Api.repository does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[18,19] cannot find symbol
[ERROR] symbol: class UserRepository
[ERROR] location: class com.careerwatch.register.service.RegisterServiceImpl
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[4,38] package com.careerwatch.Api.repository does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/mapper/RegisterDtoMapper.java:[3,34] package com.careerwatch.Api.entity does not exist
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/service/RegisterServiceImpl.java:[15,1] cannot find symbol
[ERROR] symbol: class UserRepository
[ERROR] location: class com.careerwatch.register.service.RegisterServiceImpl
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/mapper/RegisterDtoMapper.java:[13,12] cannot find symbol
[ERROR] symbol: class User
[ERROR] location: class com.careerwatch.register.mapper.RegisterDtoMapper
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[26,5] cannot find symbol
[ERROR] symbol: class UserRepository
[ERROR] location: class com.careerwatch.register.jwt.JwtService
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[34,33] cannot find symbol
[ERROR] symbol: class User
[ERROR] location: class com.careerwatch.register.jwt.JwtService
[ERROR] /tmp/tmptflbmqgd/Register/src/main/java/com/careerwatch/register/jwt/JwtService.java:[66,47] cannot find symbol
[ERROR] symbol: class User
[ERROR] location: class com.careerwatch.register.jwt.JwtService
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :Register
This is my template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Timeout: 30
Resources:
  CareerWatchFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: com.careerwatch.Api.StreamLambdaHandler::handleRequest
      Runtime: java17
      AutoPublishAlias: production
      SnapStart:
        ApplyOn: PublishedVersions
      Architectures:
        - x86_64
      MemorySize: 1024
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: CareerWatchApi
          DB_HOST: !Ref DBhost
          DB_PORT: !Ref DBport
          DB_NAME: !Ref DBname
          DB_USERNAME: !Ref DBusername
          DB_PASSWORD: !Ref DBpassword
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
  RegisterFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: com.careerwatch.register.RegisterLambdaHandler::handleRequest
      Runtime: java17
      AutoPublishAlias: production
      SnapStart:
        ApplyOn: PublishedVersions
      Architectures:
        - x86_64
      MemorySize: 1024
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: CareerWatchApi
          DB_HOST: !Ref DBhost
          DB_PORT: !Ref DBport
          DB_NAME: !Ref DBname
          DB_USERNAME: !Ref DBusername
          DB_PASSWORD: !Ref DBpassword
          SECRET_KEY: !Ref SecretKey
      Events:
        RegisterEndpoint:
          Type: Api
          Properties:
            Path: /api/v1/register
            Method: POST
Parameters:
  DBhost:
    Type: String
    Default: ''
    Description: Enter the DB host name or IP address
  DBport:
    Type: String
    Default: ''
    Description: Enter the DB port
  DBname:
    Type: String
    Default: ''
    Description: Enter the DB name
  DBusername:
    Type: String
    Default: ''
    Description: Enter the DB username
  DBpassword:
    Type: String
    Default: ''
    Description: Enter the DB password
  SecretKey:
    Type: String
    Default: ''
    Description: Enter the secret JWT key password
I ran mvn install and checked the whole project structure; with Maven it works, but with sam build it doesn't.
| I can't build my Spring Boot multi-module project with SAM | You stated in your comment:
"I am building a serverless application, deploying lambdas functions in api gateway."
If you are interested in building a serverless app with Java, look at the PAM example. It builds a complete serverless application that uses API Gateway, Lambda functions, the Java SDK, a client app that uses Cognito to log in users, and so on.
Here is the overview illustration:
Note that this example uses the AWS CDK to stand up the various resources.
This does not use SAM.
See:
Create a photo asset management application that lets users manage photos using labels
|
76384235 | 76385656 | I have a class in Kotlin (Jetpack compose) with a variable title and an exoplayer that updates the title.
class Player{
var title by mutableStateOf("value")
....
title= "new value"
...
}
@Composable
fun Display(){
val player = Player()
player.title?.let{
Text(it)
}
}
In the user interface an instance of the class is created and the title displayed but it remains unchanged after being updated in the class. Can someone help?
| Variable not updated in compose | You forgot to remember the player instance. Here, when you change title, your composable is recomposed because it reads the title. But when it is recomposed, you create new instance of Player with the default title.
val player = remember { Player() }
|
76384397 | 76385670 | Greetings, I have a problem. I am using Visual Studio 2022 and created two projects in one solution: one for the back end (ASP.NET) and one for the front end (Vue.js and Vite). Here is where the problem starts. I used the npm create vue@3 command to create the Vue project, and it launched fine on its own, but when I did the same thing in the front-end folder of my solution, Vite throws an error that it cannot find the index.html file:
Error: Failed to scan for dependencies from entries:
D:/Projects/C#/DAINIS/vueapp/index.html
X [ERROR] No loader is configured for ".html" files: index.html
<stdin>:1:7:
1 │ import "D:/Projects/C#/DAINIS/vueapp/index.html"
╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
at failureErrorWithLog (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1638:15)
at D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1050:25
at runOnEndCallbacks (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1473:45)
at buildResponseToResult (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1048:7)
at D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1060:9
at new Promise (<anonymous>)
at requestCallbacks.on-end (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:1059:54)
at handleRequest (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:725:19)
at handleIncomingPacket (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:747:7)
at Socket.readFromStdout (D:\Projects\C#\DAINIS\vueapp\node_modules\esbuild\lib\main.js:675:7)
Just in case: I did not change the project, it is as generated.
Project structure
Error dump example
I tried solutions found here and here on Stack Overflow, but still no luck.
| No loader is configured for ".html" files: index.html (Vite) | The issue is the # symbol in your file path:
D:/Projects/C#/DAINIS/vueapp/
I don't know the exact technical reason why this makes the dependency scan fail (# marks a URL fragment, so tooling that treats file paths as URLs can truncate at it), but if you remove it from the directory name, the project should run.
|
76382075 | 76383557 | R 4.2.1
package usage: netmeta 2.8-1
Issue:
In the netmeta package, the function forest() creates a forest plot. Note that forest() can generate a forest plot presenting the network inconsistency between direct and indirect comparisons.
In my case, I have an extremely large network, which leads to more than 2000 comparisons. The forest plot describing network inconsistency will be VERY long in height.
The following script can only create "part" of the forest because the forest is too long.
pdf("filename.pdf", width = 12, height = 200)
forest(out, show = "all")
dev.off()
I tried to increase the height to fit all the content in the document. But any value above 200 for height generates a "blank" PDF (though I think the text should be there, because the file is 240 KB rather than an empty file, which only takes 4 KB on disk).
I have no idea how to make the text visible in the PDF while keeping all the content in the file. Any suggestion would be appreciated. Thank you.
| Saving an extremely long figure as PDF in R makes all text invisible | PDF 1.x has an implementation limit on page size:
The minimum page size should be 3 by 3 units in default user space; the maximum should be 14,400 by 14,400 units. In versions of PDF earlier than 1.6, the size of the default user space unit was fixed at 1⁄72 inch, yielding a minimum of approximately 0.04 by 0.04 inch and a maximum of 200 by 200 inches. Beginning with PDF 1.6, the size of the unit may be set on a page-by-page basis; the default remains at 1/72 inch.
(ISO 32000-1, Annex C.2 Architectural limits)
If R doesn't adjust the default user space unit size in case of large width or height arguments, it generates PDFs that go beyond these limits whenever you set either argument to a value greater than 200.
As those limits originate from the Adobe implementation of the PDF spec, i.e. Adobe Acrobat, tests with Adobe software may indeed show issues...
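The arithmetic behind that 200-inch ceiling, for reference:

```python
# Pre-1.6 PDF architectural limits (ISO 32000-1, Annex C.2):
# at most 14,400 default user-space units per side,
# at a fixed 1/72 inch per unit.
max_units = 14_400
units_per_inch = 72
max_inches = max_units / units_per_inch
print(max_inches)  # 200.0 — which is why height > 200 misbehaves
```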
|
76382178 | 76383650 | I have a table of entries where I am trying to concatenate a few columns, and the concatenation should not happen when any one of the required values is empty.
Here is the spreadsheet. https://docs.google.com/spreadsheets/d/1lQUG4TmFTKghV8r6Gg3EilLx6zyuTkrNfnCatRVWp_U/edit#gid=189998773
I did try to use the formula =MAP(scan(,D5:D,I5:I,lambda(a,b,d,if(or(a="",b="",d="",and(e="",f="")),,if(f<>"",f,e)&" "&a&" "&b&" "&c))),D5:D,H5:H,I5:I,J5:J,K5:K,lambda(a,b,c,e,f,if(or(a="",b="",c="",and(e="",f="")),,if(f<>"",f,e)&" "&a&" "&b&" "&c)))
But I am not able to get the desired output.
Please help!
| Complex concatenation in Google Sheets | You can simplify your formula by omitting SCAN:
=MAP(D5:D,H5:H,I5:I,J5:J,K5:K,
LAMBDA(dd,hh,ii,jj,kk,
IF(OR(dd="",hh="",ii="",AND(jj="",kk="")),,dd&"-"&hh&"-"&ii&"-"&if(kk<>"",kk,jj))))
(for future reference: try naming your LAMBDAs accordingly)
|
76385291 | 76385673 | I am trying to use glm in R using a dataframe containing ~ 1000 columns, where I want to select a specific independent variable and run as a loop for each of the 1000 columns representing the dependent variables.
As a test, the glm equation works perfectly fine when I specify a single column using df$col1 for both my dependent and independent variables.
I can't seem to correctly subset a range of columns (below) and I keep getting this error, no matter how many ways I try to format the df:
'data' must be a data.frame, environment, or list
What I tried:
df = my df
cols <- df[, 20:1112]
for (i in cols) {
glm <- glm(df$col1 ~ ., data=df, family=gaussian)
}
| Using glm in R for linear regression on a large dataframe - issues with column subsetting | It would be more idiomatic to do:
predvars <- names(df)[20:1112]
glm_list <- list() ## presumably you want to save the results??
for (pv in predvars) {
glm_list[[pv]] <- glm(reformulate(pv, response = "col1"),
data=df, family=gaussian)
}
In fact, if you really just want to do a Gaussian GLM then it will be slightly faster to use
lm(reformulate(pv, response = "col1"), data = df)
in the loop instead.
If you want to get fancy:
formlist <- lapply(predvars, reformulate, response = "col1")
lm_list <- lapply(formlist, lm, data = df)
names(lm_list) <- predvars
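For readers coming from other stacks, the same "loop over predictors, fit one single-predictor model each" pattern can be sketched outside R as well. This is a purely illustrative Python/numpy analogue with hypothetical data (three stand-in predictor columns instead of the question's columns 20:1112):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
y = rng.normal(size=n)  # stand-in for df$col1
predictors = {f"x{i}": rng.normal(size=n) for i in range(3)}  # stand-ins for cols 20:1112

fits = {}
for name, x in predictors.items():
    # Design matrix: intercept column + the single predictor,
    # i.e. the analogue of lm(col1 ~ pv)
    A = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fits[name] = coef  # (intercept, slope) per predictor

print(sorted(fits))  # ['x0', 'x1', 'x2']
```

As in the R answer, the results are kept in a dict keyed by predictor name so they can be inspected afterwards.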
|
76381069 | 76383663 | When I try to add a custom domain to Firebase Hosting, I encounter this error. Nothing happens when I click Continue.
Upon clicking continue, it loads for a few seconds, and then the same screen persists.
When I checked the console, HTTP requests were answered with 503 Service Unavailable.
| Firebase Hosting Can't Add Domain | firebaser here
While we did make some changes to our custom domain provisioning this week, it seems that you're hitting another problem.
Is any part of this on a Google Workspace account by any chance? If so, your domain may not allow the Search Console. You'll want to reach out to the Google Workspace organization admin/owner to enable the Search Console. See Turn Google Search Console on or off for users for complete instructions.
|
76383791 | 76384034 | Wonder if anyone can help.
On the attached, on tab 2, I'm trying to do a VLOOKUP to tab 1, using column 1 'code' to bring back column 3 'email'. If there is more than one match in column 1 for 'code', prioritize the row that has Level 1 in column 2 of tab 1. For example, there are two rows with code 20 on tab 1, but the formula should bring back row 5 (test4@live.co.uk) rather than row 2 (test1@live.co.uk), because row 5 has Level 1 in column 2. If there is no duplicate code in column 1, just bring back the first result.
I've already got a formula in column 3 on tab 2, but it isn't working: it doesn't bring back anything if there isn't Level 1 in column 2 (see column 4). On tab 3 I've got an example of what the right result should be. This can be a VLOOKUP, INDEX/MATCH, or whatever works, as long as it's in Google Sheets.
In addition to the above, it needs to be an array formula based on something being in column 1, and have an IFERROR, just in case. Thanks.
Link to sheet
| Google Sheets vlookup to prioritize a column | You may try:
=index(ifna(vlookup(A2:A,sort(LookTable!A:C,2,),3,)))
the above code prioritizes Level 1 over a blank level. If instead there are going to be tens of levels and you'd have to choose, say, level 15 over level 8, use this alternate variant:
=index(ifna(vlookup(A2:A,sort(LookTable!A2:C,--ifna(regexextract(LookTable!B2:B,"\d+")),),3,)))
|
76384189 | 76385755 | Overview:
Pandas dataframe with a tuple index and corresponding 'Num' column:
Index Num
('Total', 'A') 23
('Total', 'A', 'Pandas') 3
('Total', 'A', 'Row') 7
('Total', 'A', 'Tuple') 13
('Total', 'B') 35
('Total', 'B', 'Rows') 12
('Total', 'B', 'Two') 23
('Total', 'C') 54
('Total', 'C', 'Row') 54
Total 112
The index and 'Num' column are already sorted with a lambda function by Alphabetical Order and based on the length of tuple elements:
dataTable = dataTable.reindex(sorted(dataTable.index, key=lambda x: (not isinstance(x, tuple), x)))
Problem:
Now, I want to sort only the rows whose index tuple has a 3rd element, based on their corresponding 'Num' values. Here is an updated example of the dataframe:
Index Num
('Total', 'A') 23
('Total', 'A', 'Tuple') 13
('Total', 'A', 'Row') 7
('Total', 'A', 'Pandas') 3
('Total', 'B') 35
('Total', 'B', 'Two') 23
('Total', 'B', 'Rows') 12
('Total', 'C') 54
('Total', 'C', 'Row') 54
Total 112
Question:
What Lambda function can achieve this?
| Sort Rows Based on Tuple Index | You can try:
def fn(x):
vals = x.sort_values(by='Num', ascending=False)
df.loc[x.index] = vals.values
m = df['Index'].apply(len).eq(3)
df[m].groupby(df.loc[m, 'Index'].str[1], group_keys=False).apply(fn)
print(df)
Prints:
Index Num
0 (Total, A) 23
1 (Total, A, Tuple) 13
2 (Total, A, Row) 7
3 (Total, A, Pandas) 3
4 (Total, B) 35
5 (Total, B, Two) 23
6 (Total, B, Rows) 12
7 (Total, C) 54
8 (Total, C, Row) 54
9 Total 112
Initial df:
Index Num
0 (Total, A) 23
1 (Total, A, Pandas) 3
2 (Total, A, Row) 7
3 (Total, A, Tuple) 13
4 (Total, B) 35
5 (Total, B, Rows) 12
6 (Total, B, Two) 23
7 (Total, C) 54
8 (Total, C, Row) 54
9 Total 112
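A self-contained version of the approach above, rebuilding the question's frame (column names as in the question):

```python
import pandas as pd

# Rebuild the "Initial df" from the question
df = pd.DataFrame(
    {
        "Index": [
            ("Total", "A"),
            ("Total", "A", "Pandas"),
            ("Total", "A", "Row"),
            ("Total", "A", "Tuple"),
            ("Total", "B"),
            ("Total", "B", "Rows"),
            ("Total", "B", "Two"),
            ("Total", "C"),
            ("Total", "C", "Row"),
            "Total",
        ],
        "Num": [23, 3, 7, 13, 35, 12, 23, 54, 54, 112],
    }
)

def fn(x):
    # Sort each group's rows by Num (descending) and write them back in place
    vals = x.sort_values(by="Num", ascending=False)
    df.loc[x.index] = vals.values

# Restrict to rows whose 'Index' value is a 3-element tuple, then group them
# by the 2nd tuple element ('A', 'B', 'C') and sort within each group
m = df["Index"].apply(lambda v: isinstance(v, tuple) and len(v) == 3)
df[m].groupby(df.loc[m, "Index"].str[1], group_keys=False).apply(fn)

print(df)
```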
|
76382968 | 76384045 | I am getting the below error pointing to 'RouteChildrenProps':
I am just trying to get through a tutorial, but I got stuck here. Here is the full code:
import React from 'react';
import { Route, RouteChildrenProps, Routes } from 'react-router';
import routes from './config/routes';
export interface IApplicationProps {}
const Application: React.FunctionComponent<IApplicationProps> = (props) => {
return (
<Routes>
{routes.map((route, index) => {
return <Route
key={index}
exact={route.exact}
path={route.path}
render={(routeProps: RouteChildrenProps<any>) =>
<route.component {...routeProps} />} />;
}
)}
</Routes>
);
};
export default Application;
The tutorial itself did not encounter this issue, so I'm hoping someone here can help me solve this.
| RouteChildrenProps is not an exported member | React-Router v6 removed route props, these are a v4/5 export. The Route component API changed significantly from v4/5 to v6. There are no route props, no exact prop since routes are now always exactly matched, and all routed content is rendered on a single element prop taking a React.ReactNode, e.g. JSX, value.
import React from 'react';
import { Route, Routes } from 'react-router';
import routes from './config/routes';
export interface IApplicationProps {}
const Application: React.FunctionComponent<IApplicationProps> = (props) => {
return (
<Routes>
{routes.map((route) => {
const Component = route.component;
return (
<Route
key={route.path}
path={route.path}
element={<Component />}
/>
);
})}
</Routes>
);
};
If any of the routed component need to access what was previously passed via props, they should use the provided React hooks: useNavigate for navigate function that replaced useHistory, useParams for route path parameters, useLocation for the location object, etc.
|
76381810 | 76383698 | I have a use case where a C wrapper loads a Rust DLL. Both the C and Rust sides have an infinite loop.
C code
#include "main.h"
#include <stdio.h>
#include "addition.h"
#include <time.h>
#include <unistd.h>
extern void spawn_thread_and_get_back(); // Rust function :
extern void keep_calling_rust_fn(); // Rust function :
int main()
{
spawn_thread_and_get_back(); // This should spawn thread with infinite loop in rust
int sleep_seconds = 7;
    while (1) {
        printf("Calling rust function\n");
        keep_calling_rust_fn(); // Call rust function
        sleep(sleep_seconds);   // call into Rust every 7 seconds
    }
}
}
and here is the rust lib code
async fn counter() {
loop {
println!("I am getting called by Tokio every 2 seconds");
// Sleep for 1 second
sleep(Duration::from_secs(2)).await;
}
}
#[tokio::main]
async fn runForever() {
let counterTask = tokio::spawn(
counter()
);
tokio::try_join!(counterTask);
}
use std::thread;
#[no_mangle]
pub unsafe extern "C" fn spawn_thread_and_get_back() {
let handle = thread::spawn(move || {
// some work here
println!("Trying to create new thread for Tokio runtime");
runForever();
});
handle.join();
}
#[no_mangle]
pub unsafe extern "C" fn keep_calling_rust_fn() {
println!("I am getting called by C wrapper every 7 second");
someRandomPrintTask();
}
async fn printTask(task_number: u32) {
println!("Print task {} -", task_number);
}
async fn someRandomPrintTask() {
let printTask = tokio::spawn(
printTask(10)
);
tokio::try_join!(printTask);
}
The issue I am facing is that once I call spawn_thread_and_get_back() from C, the thread never comes back to execute the while loop in C.
I would like to call the Rust DLL from C and spawn a separate thread for the Rust side. The idea is that the calling C thread is freed as soon as it has initialized the Rust forever-loop thread.
| Call Rust DLL function to spawn a new thread from a C wrapper and return the main thread back to C | Thanks everyone. After the discussion in the comments above, here is the working answer. We just need to change the Rust code:
async fn counter() {
loop {
println!("I am getting called by Tokio every 2 seconds");
// Sleep for 1 second
sleep(Duration::from_secs(2)).await;
}
}
#[tokio::main]
async fn runForever() {
let counterTask = tokio::spawn(
counter()
);
tokio::try_join!(counterTask);
}
use std::thread;
#[no_mangle]
pub unsafe extern "C" fn spawn_thread_and_get_back() {
let handle = thread::spawn(move || {
// some work here
println!("Trying to create new thread for Tokio runtime");
runForever();
});
    // handle.join();  // Update: removed, because joining blocked the calling thread until the spawned thread finished
}
#[no_mangle]
pub unsafe extern "C" fn keep_calling_rust_fn() {
println!("I am getting called by C wrapper every 7 second");
someRandomPrintTask();
}
async fn printTask(task_number: u32) {
println!("Print task {} -", task_number);
}
#[tokio::main] // Update: added this attribute here so the function has a runtime to spawn onto
async fn someRandomPrintTask() {
let printTask = tokio::spawn(
printTask(10)
);
tokio::try_join!(printTask);
}
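The heart of the fix, isolated from the FFI and Tokio details: spawn the worker and drop the JoinHandle instead of joining it, so the caller (the C wrapper, in the real code) gets control back immediately. A minimal pure-Rust sketch of just that pattern (the channel and the "worker started" message are illustrative additions, not from the question):

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn a detached worker thread and return right away.
/// The JoinHandle is dropped, so nothing blocks waiting for the worker;
/// the receiver lets the caller observe that the worker is alive.
pub fn spawn_and_return() -> mpsc::Receiver<&'static str> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // In the real code this is where runForever() / the Tokio runtime lives.
        tx.send("worker started").ok();
    });
    rx // note: no handle.join() here
}
```

Calling spawn_and_return() returns immediately; the worker signals over the channel while the caller is free to continue (in the question's setup, free to return into the C while loop).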
|
76383253 | 76384048 | I add two numbers in Xcos and would like to show the result in the diagram. I managed to do so using a CSCOPE element and adding an extra CLOCK_c element:
However, I would prefer a display element that simply shows the number:
=> What component could I use for that?
If there is no existing display component for plain numbers, how can I create one?
Related questions:
How to show results of a static model in Modeling view with OpenModelica?
https://softwarerecs.stackexchange.com/questions/87166/python-framework-for-block-simulations-with-graphical-user-interface-like-openm
xcos example file:
xcos_demo.xcos
<?xml version="1.0" ?>
<XcosDiagram debugLevel="0" finalIntegrationTime="30.0" integratorAbsoluteTolerance="1.0E-6" integratorRelativeTolerance="1.0E-6" toleranceOnTime="1.0E-10" maxIntegrationTimeInterval="100001.0" maximumStepSize="0.0" realTimeScaling="0.0" solver="1.0" background="-1" gridEnabled="1" title="Untitled"><!--Xcos - 2.0 - scilab-2023.1.0 - 20230523 0919-->
<Array as="context" scilabClass="String[]"></Array>
<mxGraphModel as="model">
<root>
<mxCell id="0:1:0"/>
<mxCell id="0:2:0" parent="0:1:0"/>
<BasicBlock id="7ca5d227:1887764bffb:-7ff9" parent="0:2:0" interfaceFunctionName="CONST_m" blockType="d" dependsOnU="0" dependsOnT="0" simulationFunctionName="cstblk4_m" simulationFunctionType="C_OR_FORTRAN" style="CONST_m">
<ScilabString as="exprs" height="1" width="1">
<data line="0" column="0" value="1"/>
</ScilabString>
<ScilabDouble as="realParameters" height="0" width="0"/>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList">
<ScilabDouble height="1" width="1">
<data line="0" column="0" realPart="1.0"/>
</ScilabDouble>
</Array>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="170.0" y="270.0" width="40.0" height="40.0"/>
</BasicBlock>
<ExplicitOutputPort id="7ca5d227:1887764bffb:-7ff8" parent="7ca5d227:1887764bffb:-7ff9" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="ExplicitOutputPort;align=right;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<BigSom id="7ca5d227:1887764bffb:-7ff1" parent="0:2:0" interfaceFunctionName="BIGSOM_f" blockType="c" dependsOnU="1" dependsOnT="0" simulationFunctionName="sum" simulationFunctionType="TYPE_2" style="BIGSOM_f">
<ScilabString as="exprs" height="1" width="1">
<data line="0" column="0" value="[1;1]"/>
</ScilabString>
<ScilabDouble as="realParameters" height="1" width="2">
<data line="0" column="0" realPart="1.0"/>
<data line="0" column="1" realPart="1.0"/>
</ScilabDouble>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="430.0" y="310.0" width="40.0" height="60.0"/>
</BigSom>
<ExplicitInputPort id="7ca5d227:1887764bffb:-7ff0" parent="7ca5d227:1887764bffb:-7ff1" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="-1" initialState="0.0" style="ExplicitInputPort;align=left;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<ExplicitInputPort id="7ca5d227:1887764bffb:-7fef" parent="7ca5d227:1887764bffb:-7ff1" ordering="2" dataType="REAL_MATRIX" dataColumns="1" dataLines="-1" initialState="0.0" style="ExplicitInputPort;align=left;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<ExplicitOutputPort id="7ca5d227:1887764bffb:-7fee" parent="7ca5d227:1887764bffb:-7ff1" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="-1" initialState="0.0" style="ExplicitOutputPort;align=right;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<BasicBlock id="7ca5d227:1887764bffb:-7fec" parent="0:2:0" interfaceFunctionName="CONST_m" blockType="d" dependsOnU="0" dependsOnT="0" simulationFunctionName="cstblk4_m" simulationFunctionType="C_OR_FORTRAN" style="CONST_m">
<ScilabString as="exprs" height="1" width="1">
<data line="0" column="0" value="1"/>
</ScilabString>
<ScilabDouble as="realParameters" height="0" width="0"/>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList">
<ScilabDouble height="1" width="1">
<data line="0" column="0" realPart="1.0"/>
</ScilabDouble>
</Array>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="170.0" y="360.0" width="40.0" height="40.0"/>
</BasicBlock>
<ExplicitOutputPort id="7ca5d227:1887764bffb:-7feb" parent="7ca5d227:1887764bffb:-7fec" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="ExplicitOutputPort;align=right;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<BasicBlock id="7ca5d227:1887764bffb:-7fd6" parent="0:2:0" interfaceFunctionName="CSCOPE" blockType="c" dependsOnU="1" dependsOnT="0" simulationFunctionName="cscope" simulationFunctionType="C_OR_FORTRAN" style="CSCOPE;verticalLabelPosition=bottom;verticalAlign=top;spacing=2;displayedLabel=">
<ScilabString as="exprs" height="10" width="1">
<data line="0" column="0" value="1 3 5 7 9 11 13 15"/>
<data line="1" column="0" value="-1"/>
<data line="2" column="0" value="[]"/>
<data line="3" column="0" value="[600;400]"/>
<data line="4" column="0" value="-15"/>
<data line="5" column="0" value="15"/>
<data line="6" column="0" value="30"/>
<data line="7" column="0" value="20"/>
<data line="8" column="0" value="0"/>
<data line="9" column="0" value=""/>
</ScilabString>
<ScilabDouble as="realParameters" height="1" width="4">
<data line="0" column="0" realPart="0.0"/>
<data line="0" column="1" realPart="-15.0"/>
<data line="0" column="2" realPart="15.0"/>
<data line="0" column="3" realPart="30.0"/>
</ScilabDouble>
<ScilabInteger as="integerParameters" height="1" width="15" intPrecision="sci_int32">
<data line="0" column="0" value="-1"/>
<data line="0" column="1" value="1"/>
<data line="0" column="2" value="20"/>
<data line="0" column="3" value="1"/>
<data line="0" column="4" value="3"/>
<data line="0" column="5" value="5"/>
<data line="0" column="6" value="7"/>
<data line="0" column="7" value="9"/>
<data line="0" column="8" value="11"/>
<data line="0" column="9" value="13"/>
<data line="0" column="10" value="15"/>
<data line="0" column="11" value="-1"/>
<data line="0" column="12" value="-1"/>
<data line="0" column="13" value="600"/>
<data line="0" column="14" value="400"/>
</ScilabInteger>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="610.0" y="320.0" width="40.0" height="40.0"/>
</BasicBlock>
<ExplicitInputPort id="7ca5d227:1887764bffb:-7fd5" parent="7ca5d227:1887764bffb:-7fd6" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="-1" initialState="0.0" style="ExplicitInputPort;align=left;verticalAlign=middle;spacing=10.0;rotation=0" value=""/>
<ControlPort id="7ca5d227:1887764bffb:-7fd4" parent="7ca5d227:1887764bffb:-7fd6" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="ControlPort;align=center;verticalAlign=top;spacing=10.0;rotation=90" value=""/>
<BasicBlock id="7ca5d227:1887764bffb:-7fd1" parent="0:2:0" interfaceFunctionName="CLOCK_c" blockType="h" dependsOnU="0" dependsOnT="0" simulationFunctionName="csuper" simulationFunctionType="DEFAULT" style="CLOCK_c">
<ScilabDouble as="exprs" height="0" width="0"/>
<ScilabDouble as="realParameters" height="0" width="0"/>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="610.0" y="180.0" width="40.0" height="40.0"/>
<SuperBlockDiagram as="child" background="-1" gridEnabled="1" title="">
<Array as="context" scilabClass="String[]"></Array>
<mxGraphModel as="model">
<root>
<mxCell id="7ca5d227:1887764bffc:-7fd1"/>
<mxCell id="7ca5d227:1887764bffd:-7fd1" parent="7ca5d227:1887764bffc:-7fd1"/>
<EventOutBlock id="7ca5d227:1887764bffb:-7fbc" parent="7ca5d227:1887764bffd:-7fd1" interfaceFunctionName="CLKOUT_f" blockType="d" dependsOnU="0" dependsOnT="0" simulationFunctionName="output" simulationFunctionType="DEFAULT" style="">
<ScilabString as="exprs" height="1" width="1">
<data line="0" column="0" value="1"/>
</ScilabString>
<ScilabDouble as="realParameters" height="0" width="0"/>
<ScilabInteger as="integerParameters" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="1"/>
</ScilabInteger>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="399.0" y="162.0" width="20.0" height="20.0"/>
</EventOutBlock>
<ControlPort id="7ca5d227:1887764bffb:-7fbb" parent="7ca5d227:1887764bffb:-7fbc" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="" value=""/>
<BasicBlock id="7ca5d227:1887764bffb:-7fba" parent="7ca5d227:1887764bffd:-7fd1" interfaceFunctionName="EVTDLY_c" blockType="d" dependsOnU="0" dependsOnT="0" simulationFunctionName="evtdly4" simulationFunctionType="C_OR_FORTRAN" style="">
<ScilabString as="exprs" height="2" width="1">
<data line="0" column="0" value="0.1"/>
<data line="1" column="0" value="0.1"/>
</ScilabString>
<ScilabDouble as="realParameters" height="1" width="2">
<data line="0" column="0" realPart="0.1"/>
<data line="0" column="1" realPart="0.1"/>
</ScilabDouble>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="320.0" y="232.0" width="40.0" height="40.0"/>
</BasicBlock>
<ControlPort id="7ca5d227:1887764bffb:-7fb9" parent="7ca5d227:1887764bffb:-7fba" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="" value=""/>
<CommandPort id="7ca5d227:1887764bffb:-7fb8" parent="7ca5d227:1887764bffb:-7fba" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.1" style="" value=""/>
<SplitBlock id="7ca5d227:1887764bffb:-7fb7" parent="7ca5d227:1887764bffd:-7fd1" interfaceFunctionName="CLKSPLIT_f" blockType="d" dependsOnU="0" dependsOnT="0" simulationFunctionName="split" simulationFunctionType="DEFAULT" style="">
<ScilabDouble as="exprs" height="0" width="0"/>
<ScilabDouble as="realParameters" height="0" width="0"/>
<ScilabDouble as="integerParameters" height="0" width="0"/>
<Array as="objectsParameters" scilabClass="ScilabList"/>
<ScilabInteger as="nbZerosCrossing" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabInteger as="nmode" height="1" width="1" intPrecision="sci_int32">
<data line="0" column="0" value="0"/>
</ScilabInteger>
<ScilabDouble as="state" height="0" width="0"/>
<ScilabDouble as="dState" height="0" width="0"/>
<Array as="oDState" scilabClass="ScilabList"/>
<Array as="equations" scilabClass="ScilabList"/>
<mxGeometry as="geometry" x="380.71066" y="172.0" width="0.3333333333333333" height="0.3333333333333333"/>
</SplitBlock>
<ControlPort id="7ca5d227:1887764bffb:-7fb6" parent="7ca5d227:1887764bffb:-7fb7" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="0.0" style="" value=""/>
<CommandPort id="7ca5d227:1887764bffb:-7fb5" parent="7ca5d227:1887764bffb:-7fb7" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="-1.0" style="" value=""/>
<CommandPort id="7ca5d227:1887764bffb:-7fb4" parent="7ca5d227:1887764bffb:-7fb7" ordering="2" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="-1.0" style="" value=""/>
<CommandControlLink id="7ca5d227:1887764bffb:-7fb3" parent="7ca5d227:1887764bffd:-7fd1" source="7ca5d227:1887764bffb:-7fb8" target="7ca5d227:1887764bffb:-7fb6" style="" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="340.0" y="226.29"/>
<Array as="points">
<mxPoint x="340.0" y="172.0"/>
</Array>
<mxPoint as="targetPoint" x="380.71" y="172.0"/>
</mxGeometry>
</CommandControlLink>
<CommandControlLink id="7ca5d227:1887764bffb:-7fb2" parent="7ca5d227:1887764bffd:-7fd1" source="7ca5d227:1887764bffb:-7fb5" target="7ca5d227:1887764bffb:-7fbb" style="" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="380.71" y="172.0"/>
<Array as="points"></Array>
<mxPoint as="targetPoint" x="399.0" y="172.0"/>
</mxGeometry>
</CommandControlLink>
<CommandControlLink id="7ca5d227:1887764bffb:-7fb1" parent="7ca5d227:1887764bffd:-7fd1" source="7ca5d227:1887764bffb:-7fb4" target="7ca5d227:1887764bffb:-7fb9" style="" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="380.71" y="172.0"/>
<Array as="points">
<mxPoint x="380.71" y="302.0"/>
<mxPoint x="340.0" y="302.0"/>
</Array>
<mxPoint as="targetPoint" x="340.0" y="277.71"/>
</mxGeometry>
</CommandControlLink>
</root>
</mxGraphModel>
<mxCell as="defaultParent" id="7ca5d227:1887764bffd:-7fd1" parent="7ca5d227:1887764bffc:-7fd1"/>
</SuperBlockDiagram>
</BasicBlock>
<CommandPort id="7ca5d227:1887764bffb:-7fd0" parent="7ca5d227:1887764bffb:-7fd1" ordering="1" dataType="REAL_MATRIX" dataColumns="1" dataLines="1" initialState="-1.0" style="CommandPort;align=center;verticalAlign=bottom;spacing=10.0;rotation=90" value=""/>
<ExplicitLink id="7ca5d227:1887764bffb:-7fed" parent="0:2:0" source="7ca5d227:1887764bffb:-7ff8" target="7ca5d227:1887764bffb:-7ff0" style="ExplicitLink" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="44.0" y="20.0"/>
<Array as="points"></Array>
<mxPoint as="targetPoint" x="-4.0" y="20.0"/>
</mxGeometry>
</ExplicitLink>
<ExplicitLink id="7ca5d227:1887764bffb:-7fea" parent="0:2:0" source="7ca5d227:1887764bffb:-7feb" target="7ca5d227:1887764bffb:-7fef" style="ExplicitLink" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="44.0" y="20.0"/>
<Array as="points"></Array>
<mxPoint as="targetPoint" x="-4.0" y="40.0"/>
</mxGeometry>
</ExplicitLink>
<ExplicitLink id="7ca5d227:1887764bffb:-7fd2" parent="0:2:0" source="7ca5d227:1887764bffb:-7fee" target="7ca5d227:1887764bffb:-7fd5" style="ExplicitLink" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="44.0" y="30.0"/>
<Array as="points"></Array>
<mxPoint as="targetPoint" x="-4.0" y="20.0"/>
</mxGeometry>
</ExplicitLink>
<CommandControlLink id="7ca5d227:1887764bffb:-7fce" parent="0:2:0" source="7ca5d227:1887764bffb:-7fd0" target="7ca5d227:1887764bffb:-7fd4" style="CommandControlLink" value="">
<mxGeometry as="geometry">
<mxPoint as="sourcePoint" x="20.0" y="44.0"/>
<Array as="points"></Array>
<mxPoint as="targetPoint" x="20.0" y="-4.0"/>
</mxGeometry>
</CommandControlLink>
</root>
</mxGraphModel>
<mxCell as="defaultParent" id="0:2:0" parent="0:1:0"/>
</XcosDiagram>
| How to show result of static model (=plain number) in Xcos? | Use the AFFICH_m block (https://help.scilab.org/AFFICH_m). However, be warned that you still have to run the simulation to see the value:
|
76385130 | 76385764 | I am using MS Access to create a DB. I have different physical containers and I want to obtain the running sum of the liquid that has been added and taken out of the container to give the balance of the liquid inside the container.
The biggest problem is ordering the liquid transactions. I have read lots of resources and found that one way of doing this is adding a time to the dates so they can be ordered. However, I am supposed to not use time, so I have decided to add a manual ordering number within each date. I am not sure this is the best way of achieving it, but at least I can do the ordering.
The fields I have are:
-ContainerId
-DateTransaction
-OrderInDate
-Quantity
Quantity is "-" for withdrawal and "+" for additions, so if I can properly get a running total, I will get the balance.
I thought of adding OrderInDate as seconds to the Date to correctly order the data and I have written a query like this (qryInventoryTransactionsOrder):
SELECT ContainerId, Quantity, DateTransaction, OrderInDate,
DateAdd("s",OrderInDate,DateTransaction) AS Expr1,
DSum("Quantity","qryInventoryTransactionsOrder",
"[ContainerId]=" & [ContainerId] & " AND
[Expr1] <= #" & [Expr1] & "#") AS Balance
FROM InventoryTransactions
ORDER BY ContainerId, DateAdd("s",OrderInDate,DateTransaction);
This returns a very interesting result, like this:
| ContainerId | DateTransaction | OrderInDate | Quantity | Expr1 | Balance |
| --- | --- | --- | --- | --- | --- |
| 1 | 29/05/2023 | 1 | -50 | 29/05/2023 00:00:01 | -50 |
| 1 | 31/05/2023 | 1 | 100 | 31/05/2023 00:00:01 | 50 |
| 1 | 31/05/2023 | 2 | 255 | 31/05/2023 00:00:02 | 305 |
| 1 | 01/06/2023 | 1 | -155 | 01/06/2023 00:00:01 | |
| 1 | 01/06/2023 | 2 | -155 | 01/06/2023 00:00:02 | |
| 1 | 01/06/2023 | 3 | 2500 | 01/06/2023 00:00:03 | |
| 1 | 08/06/2023 | 1 | -500 | 08/06/2023 00:00:01 | 1995 |
As you will see "Balance" is correct for the first 3 lines, then it returns 3 empty results, and then 1995.
What am I doing wrong here or is there a better way to achieve this result?
| Ms Access running sum with dates and order inside dates | I can't say anything about recursive queries in MS Access.It is interesting.
Try this query, where DSum counts sum from main table.
SELECT ContainerId, Quantity, DateTransaction, OrderInDate
,DateAdd("s",OrderInDate,DateTransaction) AS Expr1
,DSum("Quantity","InventoryTransactions"
,"[ContainerId]=" & [ContainerId]
& " AND Format(DateAdd(""s"",OrderInDate,DateTransaction),""yyyyMMddhhmmss"") <= "
& format(DateAdd("s",OrderInDate,DateTransaction),"yyyyMMddhhmmss") & "")
AS Balance
FROM InventoryTransactions
ORDER BY ContainerId, DateAdd("s",OrderInDate,DateTransaction);
|
76383683 | 76384081 | These are 2 models I have:
class Skill(models.Model):
name = models.CharField(max_length=100)
def __str__(self):
return self.name + " - ID: " + str(self.id)
class Experience(models.Model):
consultant = models.ForeignKey("Consultant", related_name="experience", on_delete=models.CASCADE)
project_name = models.CharField(max_length=100)
company = models.CharField(max_length=100)
company_description = models.TextField(null=True, blank=True)
from_date = models.DateField()
to_date = models.DateField()
project_description = models.CharField(max_length=100)
contribution = models.TextField()
summary = models.TextField()
is_pinned = models.BooleanField(default=False)
role = models.CharField(max_length=100, null=True)
skill = models.ForeignKey("Skill", related_name="experience", on_delete=models.CASCADE)
I want to do something that is quite common but apparently not possible out of the box with DRF: I want to have an endpoint /experience/ with a POST method where I can send a LIST of skill ids (skill field, ForeignKey). For example:
{
"project_name": "Project AVC",
"company": "XYZ Company",
"company_description": "Description of XYZ Company",
"from_date": "2022-01-01",
"to_date": "2022-12-31",
"project_description": "Description of Project ABC",
"contribution": "Contributions to Project ABC",
"summary": "Summary of Experience",
"is_pinned": false,
"role": "Consultant",
"skills_ids": [1,2,3],
"consultant": 1
}
If there are Skill records in the DB with ids 1,2,3 then it will create 3 records in the experience table (one for each skill ofc) . If there's no skill with such id, then during validation it should return an error to the user informing so.
The name of the field can be either skill , skills, skill_ids... it does not matter.
This is the ExperienceSerializer I created:
class ExperienceSerializer(serializers.ModelSerializer):
skills = serializers.PrimaryKeyRelatedField(
many=True,
queryset=Skill.objects.all(),
write_only=True
)
class Meta:
model = Experience
exclude = ['skill']
def create(self, validated_data):
skills_data = validated_data.pop('skills', [])
experience = Experience.objects.create(**validated_data)
for skill in skills_data:
experience.skill.add(skill)
return experience
but that gives me the error:
django.db.utils.IntegrityError: null value in column "skill_id" of relation "coody_portfolio_experience" violates not-null constraint
DETAIL: Failing row contains (21, BOOM, XYZ Company, 2022-01-01, 2022-12-31, Description of Project ABC, Contributions to Project ABC, Summary of Experience, 1, null, f, Consultant, Description of XYZ Company).
I also tried using serializers.ListField but it doesn't seem to be quite the serializer for this.
Tried the approach from this answer as well, so then I had my serializer like this:
class ExperienceSerializer(serializers.ModelSerializer):
skill_ids = serializers.ListField(
child=SkillSerializer(),
write_only=True
)
class Meta:
model = Experience
fields = (
'consultant',
'project_name',
'company',
'company_description',
'from_date',
'to_date',
'project_description',
'contribution',
'summary',
'is_pinned',
'role',
'skill',
'skill_ids'
)
def create(self, validated_data):
skill_ids = validated_data.pop('skill_ids')
experience = Experience.objects.create(**validated_data)
experience.set(skill_ids)
return experience
I modified the answer a bit, from child = serializers.IntegerField to child=SkillSerializer(), as it was giving me an error about child not being instantiated. Notice also the use of ListField now.
And here is my payload in this version:
{
"project_name": "BOOM",
"company": "XYZ Company",
"company_description": "Description of XYZ Company",
"from_date": "2022-01-01",
"to_date": "2022-12-31",
"project_description": "Description of Project ABC",
"contribution": "Contributions to Project ABC",
"summary": "Summary of Experience",
"is_pinned": false,
"role": "Consultant",
"skill_ids": [3, 4,2,1],
"consultant": 1
}
which gives error 400:
{
"skill": [
"This field is required."
],
"skill_ids": {
"0": {
"non_field_errors": [
"Invalid data. Expected a dictionary, but got int."
]
},
"1": {
"non_field_errors": [
"Invalid data. Expected a dictionary, but got int."
]
},
"2": {
"non_field_errors": [
"Invalid data. Expected a dictionary, but got int."
]
},
"3": {
"non_field_errors": [
"Invalid data. Expected a dictionary, but got int."
]
}
}
}
Tried also this example here to no avail.
I spent some time reading this entire post explaining the issue of nested serialization, but I don't think it's quite related to my issue. All I want is a list to be sent in POST.
I'm honestly going into a rabbit hole now of just trying different pieces together, but I have no idea how DRF wants me to do these stuff and their documentation is awful and lacking simple examples.
If someone could post an example with explanations, and not just the solution, that would be much appreciated.
| IN DRF, how to create a POST serializer where I can add multiple values of a Foreign Key field | With the current relation, if your payload contains "skills_ids": [1,2,3], then you would create three different instances of Experience, each containing one skill, which is NOT what you want; that is bad practice.
Instead, a many-to-many relationship is more adequate, associating multiple skills to an Experience and the other way around, thus avoiding duplicate values in your database.
This is also the syntax that you are using at experience.skill.add(skill); that is how you would attach a Skill to an Experience with such a relation. But in reality, you do not need to do anything other than let the framework work for you!
models.py
class Skill(models.Model):
...
class Experience(models.Model):
...
skills = models.ManyToManyField(Skill)
serializers.py
class ExperienceSerializer(serializers.ModelSerializer):
class Meta:
model = Experience
fields = '__all__'
payload
{
"project_name": "Project AVC",
"company": "XYZ Company",
"company_description": "Description of XYZ Company",
"from_date": "2022-01-01",
"to_date": "2022-12-31",
"project_description": "Description of Project ABC",
"contribution": "Contributions to Project ABC",
"summary": "Summary of Experience",
"is_pinned": false,
"role": "Consultant",
"skills": [1,2,3],
"consultant": 1
}
|
76381541 | 76383769 | I have a program that should generate combinations of concatenation of all possible adjacent characters.
For example
Input = [a,b,c]
Output = [a,b,c], [ab,c], [a,bc]
Input = [a,b,c,d]
Output = [a,b,c,d], [ab,c,d], [a,bc,d], [a,b,cd], [ab,cd]
Input = [a,b,c,d,e]
Output = [a,b,c,d,e], [ab,c,d,e], [a,bc,d,e], [a,b,cd,e], [a,b,c,de], [ab,cd,e], [a,bc,de], **[ab,c,de]**
The last one is missing from my program's output.
Basically we are only allowed to combine two adjacent characters.
I have written the program below.
public class Program
{
public static void Main(string[] args)
{
List<string> cases = new List<string> {"a b c", "a b c d", "a b c d e"};
for (int c = 0; c < cases.Count; c++)
{
var result = F(cases[c]);
Console.WriteLine(cases[c]);
result.ForEach(Console.WriteLine);
Console.WriteLine("---------------------------");
}
}
public static List<string> F(string searchTerm)
{
List<string> result = new List<string>();
var terms = searchTerm.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).ToList();
if (terms.Count == 1)
return new List<string> { searchTerm };
for (int x = 1; x <= 2; x++)
{
for (int i = 0; i < terms.Count - 1; i++)
{
if (x == 1)
{
int j = i;
var joinedWord = terms[j] + terms[j + 1];
result.Add(searchTerm.Replace($"{terms[j]} {terms[j + 1]}", joinedWord));
}
if (x == 2)
{
int j = i;
if (j + 3 < terms.Count)
{
var firstJoinedWord = terms[j] + terms[j + 1];
var secondJoinedWord = terms[j + 2] + terms[j + 3];
result.Add(searchTerm.Replace($"{terms[j]} {terms[j + 1]} {terms[j + 2]} {terms[j + 3]}", firstJoinedWord + " " + secondJoinedWord));
}
}
}
}
return result;
}
}
And here is the output.
I don't know if we need to use recursion/dynamic programming to solve this, because there can be any number of combinations. Any help will be appreciated. Thanks.
| Generate combinations by combining adjacent characters | Here's a quick JavaScript version that you can run in your browser to verify the result -
function *mates(t) {
if (t.length == 0) return yield []
if (t.length == 1) return yield t
for (const m of mates(t.slice(1)))
yield [t[0], ...m]
for (const m of mates(t.slice(2)))
yield [mate(t[0], t[1]), ...m]
}
function mate(a,b) {
return a + b
}
for (const m of mates(["a", "b", "c", "d", "e"]))
console.log(m.join(","))
.as-console-wrapper { min-height: 100%; top: 0; }
a,b,c,d,e
a,b,c,de
a,b,cd,e
a,bc,d,e
a,bc,de
ab,c,d,e
ab,c,de
ab,cd,e
We can easily convert that to a C# program -
using System;
using System.Linq;
using System.Collections.Generic;
class Program
{
public static IEnumerable<List<string>> Mates(List<string> t)
{
if (t.Count == 0)
{
yield return new List<string> {};
yield break;
}
if (t.Count == 1)
{
yield return t;
yield break;
}
foreach (var m in Mates(t.GetRange(1, t.Count - 1)))
yield return new List<string> { t[0] }
.Concat(m)
.ToList();
foreach (var m in Mates(t.GetRange(2, t.Count - 2)))
yield return new List<string> { Mate(t[0], t[1]) }
.Concat(m)
.ToList();
}
public static string Mate(string a, string b)
{
return a + b;
}
public static void Main(string[] args)
{
var input = new List<string> { "a", "b", "c", "d", "e" };
foreach (var m in Mates(input))
Console.WriteLine(string.Join(",", m));
}
}
a,b,c,d,e
a,b,c,de
a,b,cd,e
a,bc,d,e
a,bc,de
ab,c,d,e
ab,c,de
ab,cd,e
|
76383755 | 76384101 | I'm trying to parse C style comments using FParsec. Not sure why this is failing:
My parser code:
let openComment : Parser<_,unit> = pstring "/*"
let closeComment : Parser<_,unit> = pstring "*/"
let comment = pstring "//" >>. restOfLine true
<|> openComment >>. (charsTillString "*/" true System.Int32.MaxValue) |>> Comment
//<|> openComment >>. manyCharsTill anyChar closeComment |>> Comment
let spaceComments = many ((spaces1 |>> IgnoreU) <|> comment)
let str s = spaceComments >>. pstring s .>> spaceComments
Test Harness:
let testStr = @"
// test comment
/* a block comment
*/
x // another comment
"
match run (str "x") testStr with
| Success(result, _, _) -> printfn "Success: %A" result
| Failure(errorMsg, _, _) -> assert false
()
Error message. It is the same for both charsTillString and manyCharsTill.
Error in Ln: 6 Col: 4
^
Note: The error occurred at the end of the input stream.
Could not find the string '*/'.
Comment and IgnoreU are both cases of a discriminated union over string.
| Why is my FParsec parser failing to recognize a block comment? | The problem is that the combinators in your comment parser don't have the precedence/associativity that you want. You can fix this by grouping with parens:
let comment = (pstring "//" >>. restOfLine true)
<|> (openComment >>. (charsTillString "*/" true System.Int32.MaxValue)) |>> Comment
I find that choice is often easier to read than <|> for complex parsers:
let comment =
choice [
pstring "//" >>. restOfLine true
openComment >>. (charsTillString "*/" true System.Int32.MaxValue)
] |>> Comment
|
76385372 | 76385796 | Kotlin and Java code templates in Android Studio.
I tried to create a code template for the truth library that would work like the val template works i.e,
When you want to create a variable from the result of calling a function? you just type,
functionName().val.
When you press enter a variable is created ie, val f = functionName().
How would I replicate this behaviour on the truth library such that when I type f.assert for instance, I get Truth.assertThat(f)?
| How can I create a code template in Android Studio to replicate the val template in Kotlin | This feature is named "Postfix completion"
You can find details here https://www.jetbrains.com/help/idea/settings-postfix-completion.html
I have tried now and it's allowed me to add and edit java and groovy templates but kotlin is not supported.
The other perfect feature of Idea is Live templates you can use them to implement required templates.
You can find details here https://www.jetbrains.com/help/idea/using-live-templates.html
Update: I have checked the current version of Idea, kotlin is not supported yet.
|
76381363 | 76383926 | I have a simple class where one of the properties is:
public TValue Value => IsSuccess
? _value
: throw new InvalidOperationException("The value of a failure result can not be accessed.");
So it works like this, when some operation is a success assign value from this operation to property easy. But now I when I do sutch a thig:
var result = new Result (someValue) { IsSuccess = false }
var serialized = JsonConvert.SerializeObject(result); // of course result type is marked as seriazable
So here I am setting such a result to false, and next I want to serilize it, problem is that exception is not serializing and I am getting just thrown exception. What am I missing here?
| Serialize property access exception to json | I'm wondering whether there isn't a better design for what you're trying to achieve. Namely, it might be better to move the IsSuccess check logic into a method so that it doesn't get hit during serialization. But if you really decide to do it this way, you could use a JsonConverter to catch and serialize the exception for you:
public class ValueOrExceptionConverter : JsonConverter
{
public override void WriteJson(JsonWriter writer, object? value, JsonSerializer serializer)
{
if (value == null)
{
return;
}
try
{
serializer.Serialize(writer, ((dynamic)value).Value);
}
catch (Exception ex)
{
serializer.Serialize(writer, new { Value = new { ex.Message } });
}
}
public override object? ReadJson(JsonReader reader, Type objectType, object? existingValue, JsonSerializer serializer)
{
throw new NotImplementedException();
}
public override bool CanConvert(Type objectType) => true;
}
I'm assuming your Result class is generic so using this converter would look like this:
[JsonConverter(typeof(ValueOrExceptionConverter))]
public class Result<TValue>
This is all assuming that you're using Newtonsoft.Json. The result of using that converter is something like this:
{"Value":"Message":"The value of a failure result can not be accessed."}}
A few notes about the converter code:
Instead of casting to dynamic, you might be better of using an interface or abstract class if you have it. Or if you know exactly which generics you might use for TValue then perhaps you can cast even more specifically.
I've assumed that you don't want to serialize the entire exception. So if you need more than just the Message then you can add those properties like so for example: new { ex.Message, InnerMessage = ex.InnerException.Message }.
I've assumed that the Result class only has one property, named Value. You can add more properties in the same way you can add additional Exception properties.
I haven't implemented the ReadJson because deserializing the exception instead of the value makes that quite tricky. But I'm sure it's not impossible if you need to also deserialize the Result.
|
76383757 | 76384134 | I'm throwing an error in my loaders when response.ok of fetch is false. The Error component is then loaded.
But my server sometimes returns 429 status code (too many requests) upon which I don't want to load an error component but instead simply do nothing, or maybe display some message but certainly without reloading the already loaded component, or redirecting, etc.
How can that be implemented?
https://codesandbox.io/s/hungry-hooks-yuucsp?file=/src/App.js
| React Router cancel loader | You can check the response status code specifically for a 429 status and return any defined value back to the UI. I think the important detail here is that you should return something instead of undefined. null appears to work and not throw any extraneous errors:
function loader() {
let response = imitateFetch();
console.log(response);
if (response.status === 429) {
console.log(429);
return null; // <-- return null
}
if (!response.ok) {
throw "loader error";
}
return {};
}
but you can return anything you like, for example, change the response status to 200.
function loader() {
let response = imitateFetch();
console.log(response);
if (response.status === 429) {
console.log(429);
response.status = 200;
response.ok = true;
return response;
}
if (!response.ok) {
throw "loader error";
}
return {};
}
It's really up to your app's, or code's, specific use case what it returns and how the UI handles it.
|
76380714 | 76384044 | Question
How can I center an ImageView and a TextView within a ConstraintLayout, ensuring that they remain centered regardless of the ConstraintLayout's height, while also ensuring that the ImageView shrinks if necessary to maintain visibility of the TextView?
Expected results:
In case of a large ConstraintLayout (centered, but not larger than the original image):
In case of a small ConstraintLayout (centered, shrunken image, text still visible):
I build the following reproducible example:
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="500dp">
<ImageView
android:id="@+id/imageView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
tools:srcCompat="@tools:sample/avatars"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/textView" />
<TextView
android:id="@+id/textView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:paddingHorizontal="@dimen/primary_margin"
android:text="Just a text"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toBottomOf="@+id/imageView"
app:layout_constraintBottom_toBottomOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
Edit
The closest solution I have found so far is as follows. However, the only remaining drawback is that I need to manually specify the maximum width and height, whereas my ideal scenario would be to dynamically adjust them according to the image's actual width and height.
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/imageView"
android:layout_width="0dp"
android:layout_height="0dp"
android:layout_marginBottom="16dp"
app:layout_constraintHeight_max="200dp"
app:layout_constraintWidth_max="200dp"
tools:srcCompat="@tools:sample/avatars"
app:layout_constraintVertical_chainStyle="packed"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/textView"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent" />
<TextView
android:id="@+id/textView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:paddingHorizontal="@dimen/primary_margin"
android:text="Just a text"
app:layout_constraintTop_toBottomOf="@id/imageView"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintBottom_toBottomOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
| How can I center an ImageView and TextView in a ConstraintLayout while adjusting ImageView size to maintain TextView visibility? | If you take your closest solution, try setting the ImageView height and width to wrap_content again (so it displays at its native size) but add app:layout_constrainedWidth="true" and app:layout_constrainedHeight="true". This basically lets you define the height and width you want (i.e. whatever the image size actually is), but also enforces the constraints, so it acts like 0dp (i.e. MATCH_CONSTRAINT) when necessary:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="16dp"
xmlns:app="http://schemas.android.com/apk/res-auto">
<ImageView
android:id="@+id/image"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constrainedWidth="true"
app:layout_constrainedHeight="true"
android:src="@mipmap/ic_launcher_round"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/text"
app:layout_constraintVertical_chainStyle="packed"
/>
<TextView
android:id="@+id/text"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hi"
android:textSize="85sp"
android:layout_marginTop="16dp"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toBottomOf="@id/image"
app:layout_constraintBottom_toBottomOf="parent"
/>
</androidx.constraintlayout.widget.ConstraintLayout>
Here's the same layout in the Phone and Wear OS Square reference device templates:
The actual ImageView is distorted (because both dimensions are trying to size to wrap_content) but the image maintains its aspect ratio to fit in both dimensions (if you're using the default scaleType).
|
76385243 | 76385822 | How can I provide a string to a function in pytest, so that the function under test treats it as the content of a file?
I have a function that parses a file. The file is in a custom text format.
I want to test multiple different input files.
Initial idea was to use pytest is this way:
import pytest
import mymodule as t
# this is all valid one line config file, starting simple
@pytest.mark.parametrize("input_file", [
"#",
"##",
"",
" ",
" ",
])
def test_is_input_file_valid(input_file):
assert t.myclass(input_file)
The problem I have is that each line needs to be the content of an input file, because t.myclass(input_file) is expecting a file, not a string. So I need to somehow mock it.
I am assuming pytest has this functionality by default or via a plugin, but I was not able to find it.
| How to mock string as content of file for pytest? | I wrote a pytest plugin called pytest_tmp_files to solve this exact problem. Here's how it would look for your example:
@pytest.mark.parametrize(
'tmp_files', [
{'f': '#'},
{'f': '##'},
{'f': ''},
{'f': ' '},
{'f': ' '},
],
indirect=['tmp_files'],
)
def test_is_input_file_valid(tmp_files):
assert t.myclass(tmp_files / 'f')
Some things worth noting:
The parameters are dictionaries where the keys are file names and the values are file contents. This is a bit more verbose than just specifying the file contents, but much more flexible.
I'm assuming that your parser accepts pathlib.Path objects, which is what tmp_files / 'f' is. If not, you might have to do some sort of conversion here, but it should be straightforward.
The specified files are actually created in a real temporary directory (unique for each test case), so you can treat them exactly like real files.
I wrote another pytest plugin called parametrize_from_file that allows you to specify test parameters in structured data files (e.g. YAML), separate from the test code. I'd recommend checking it out as well. File contents often end up being long, multi-line strings that mess up the indentation of the parameter list and make the whole test script hard to read. Moving these parameters to a separate file can really help readability.
See also this discussion on GitHub, where I tried to get this feature added to pytest proper.
|
76383483 | 76384221 | I am using Nextjs and I would like to use the html tag, input type of 'date'. I also would like to set a defaultValue and I did something like this and it is not working:
const [dDate, setdDate] = useState('17/11/2022');
const ReactComponent = ()=>{
return(
<input type="date" className=" border-2 py-1 px-2 rounded-md"
onChange={(text)=>handleDepositeDate(text.target.value, index)} defaultValue={dDate}
/>
)
};
It does not display on the screen. I also tried using value instead of defaultValue, and it still didn't work. I also tried other date inputs in the application and defaultValue didn't work.
| defaultValue for input type 'date' is not working in Nextjs | The HTML date input expects its value in ISO yyyy-mm-dd format, not dd/mm/yyyy. Try using that format with the value (or defaultValue) attribute, like so:
<input type="date" value="2022-11-17"/>
|
76381366 | 76384056 | I created an AWS EKS cluster using the assume-role API: role A assumed role B to perform the create-EKS API call. I created the cluster and specified role C as the EKS cluster role. As I understand it, role C's ARN will be stored in the EKS aws-auth ConfigMap.
When A assumes role C to access the created EKS cluster, "Failed to get namespaces: Unauthorized" is returned.
I always use assume-role to invoke APIs. Does anyone know whether aws-auth stores role C's ARN like 'arn:aws:iam::C:role/k8s-cluster-role', or whether EKS stores the role ARN in aws-auth in another way?
| Can't use assume role to access AWS EKS which is created by assume role API | You have a misconception: the role that is stored in the aws-auth ConfigMap for the system:masters group in your cluster is not the cluster role, but the IAM principal that creates the cluster itself, as per the official doc.
When you create an Amazon EKS cluster, the IAM principal that creates the cluster is automatically granted system:masters permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane.
From what you have written, if the sequence is right and the assume-role approach you are following works properly, you should be able to query your cluster API resources with role B, not role C, since B is the one you used to create the cluster. In your current setup, you are expecting role C to be able to access cluster resources, even though you created the cluster with role B.
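For reference, if you do want role C (or any other IAM role) to access the cluster as well, you can map it under mapRoles in the aws-auth ConfigMap in kube-system. A sketch of what such an entry typically looks like; the account ID, role name and group here are placeholders for your own values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # the creator principal (role B) is granted system:masters
    # implicitly and does not appear in this ConfigMap
    - rolearn: arn:aws:iam::<ACCOUNT_ID>:role/role-c
      username: role-c
      groups:
        - system:masters
```

Note that mapRoles expects the plain IAM role ARN, not the STS assumed-role ARN that appears in session credentials.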
|
76380876 | 76384084 | Adding a description with --description while creating a deb package on Linux produces a deb file which, when opened with the QApt package installer (double-click installation), shows the defined text in the description section as a bold header, and underneath there is that description again with the first character stripped:
Command:
jpackage -t deb \
--app-version 1.0.0 \
--icon books256.png \
--name hello \
--description 'my testing text' \
--dest target/jpackage_outputdir/ \
--temp target/jpackage_tempdir \
--input target/jpackage_inputdir \
--main-class core.Main \
--main-jar jarfile.jar \
--linux-menu-group Office \
--linux-shortcut
Is there a way to prevent this unprofessional look by finely setting bold header and text underneath separately or at least by removing this duplication in description section?
Using Kubuntu 23.0.4, JDK - openjdk 17.0.7 2023-04-18 LTS
| Description bug in DEB file created by jpackage when opened with QApt installer | A solution to this problem might be to use multi-line description. First line is then bold text and second line is below it, altough bug still remains when using single-line description.
jpackage -t deb \
--app-version 1.0.0 \
--icon books256.png \
--name hello \
--description 'first line
second line' \
--dest target/jpackage_outputdir/ \
--temp target/jpackage_tempdir \
--input target/jpackage_inputdir \
--main-class core.Main \
--main-jar jarfile.jar \
--linux-menu-group Office \
--linux-shortcut
|
76384941 | 76385914 | I've composed the component below, and I need to apply a type for the custom iconFontSize prop. How can I do this?
import { SvgIconComponent } from '@mui/icons-material'
import { Typography, TypographyProps} from '@mui/material'
type Props = TypographyProps & {
Icon: SvgIconComponent
iconFontSize: /* insert type here! */
}
export const IconTypography = ({
Icon,
iconFontSize = 'inherit',
columnGap = 1,
children,
...props
}: Props) => {
return (
<Typography display="flex" alignItems="center" columnGap={columnGap} {...props}>
<Icon fontSize={iconFontSize} />
{children}
</Typography>
)
}
Thanks in advance!
| How to type Material UI Icon's fontSize prop? | You can type iconFontSize as a union type of 'inherit' | 'large' | 'medium' | 'small':
type Props = TypographyProps & {
Icon: SvgIconComponent
iconFontSize: "inherit" | "small" | "medium" | "large"
}
If you want to use the exact type from the MUI type definition file, you can alternatively use:
import { OverridableStringUnion } from '@mui/types';
fontSize?: OverridableStringUnion<
'inherit' | 'large' | 'medium' | 'small',
SvgIconPropsSizeOverrides
>;
The OverridableStringUnion type is in turn defined as such:
export type OverridableStringUnion<T extends string | number, U = {}> = GenerateStringUnion<
Overwrite<Record<T, true>, U>
>;
See the type definition in the MUI docs here: https://mui.com/material-ui/api/svg-icon/
|
76381138 | 76384119 | I'm using three.js and I need to create a few clones of a texture made with THREE.WebGLRenderTarget().
I can use the original texture, e.g.:
scene.background = renderTarget.texture;
But if I try to use a clone of it:
const tex = renderTarget.texture.clone();
scene.background = tex;
I get the following error:
THREE.WebGLState: TypeError: Failed to execute 'texSubImage2D' on 'WebGL2RenderingContext': Overload resolution failed.
If I add the line:
tex.isRenderTargetTexture = true;
Now I don't get any error, but the texture is all black.
I have also tried to clone the render target (instead of its texture) but it didn't work either. Can you please help me?
Thank you in advance.
| Cloning a texture created with THREE.WebGLRenderTarget | Problem solved: I created a framebuffer texture, and I copied the texture using the renderer.copyFramebufferToTexture() method of the THREE.WebGLRenderer class.
|
76382996 | 76384240 | Given a distance and a number of turns, calculate the number of turns to spend at each speed (1, 2, 4, and 8) so as to complete the distance on the last turn.
You start at speed 1, and each turn you can accelerate to the next speed or do nothing (1 -> 2, 2 -> 4, 4 -> 8); once you accelerate you can't slow back down.
Each turn you move speed steps (distance -= speed).
It's also OK to go more than distance steps, but only if it happens on the last turn.
for example: distance = 25, turns = 10 -> speed 1: 1 turn, speed 2: 5 turns, speed 4: 4 turns, the total distance is 1 * 1 + 2 * 5 + 4 * 4 = 27 steps, but we got to 25 steps on the last turn which is what we need.
I need help writing a function that will calculate that.
def calc_speeds(distance: int, arrive_in_x_turns: int) -> dict[int, int]:
So far I've used the formula turns_till_arrival = ((turns_till_arrival - (speed // 2)) // speed) + (speed // 2) + 1 in a for loop for each speed. If turns_till_arrival is equal to turns, I accelerate until I get to that speed without spending extra turns at other speeds (only the 1 necessary turn, because I can only accelerate once per turn). But a lot of the time this doesn't work, because for it to work I must spend more than 1 turn at other speeds, and I can't figure out a way to calculate that.
| algorithm to calculate speeds to move in order to arrive in x turns | This is a fairly exhaustive approach, but it does provide the correct answer to the problem above of how to cover your distance in a specified number of turns.
For my method, I first created the output dictionary in the form of {1: distance, 2: 0, 4: 0, 8: 0}, where, for each key-value pair, the key represents your speed and the value represents the total number of turns spent at that speed. This dictionary above represents the way to cover your distance in the maximum number of turns possible, since each turn you are just running at speed=1.
The important realization here is that if you subtract two turns from output[1] and add a turn to output[2], you are covering the same distance as before, but you have lost a turn. Therefore, you can make a loop that stops when the number of turns in your output equals arrive_in_x_turns, and during each iteration, you lose a turn by doing the subprocess I have described above. This also works for the other speeds (i.e. subtracting two turns from output[2] and adding a turn into output[4] maintains the distance you want to travel while losing a turn).
Here is my implementation below. When running your example of calc_speeds(distance=25, arrive_in_x_turns=10), my output is {1: 1, 2: 6, 4: 3, 8: 0}. The number of turns are correct (1 + 6 + 3 = 10), as is the distance covered (1*1 + 2*6 + 4*3 + 8*0 = 1 + 12 + 12 + 0 = 25).
def calc_speeds(distance: int, arrive_in_x_turns: int) -> dict[int, int]:
output = {1: distance, 2: 0, 4: 0, 8: 0}
# Loop until the total number of turns is equal to the number of turns requested
while (sum(output.values()) > arrive_in_x_turns):
# Subtract two turns off of '1' and add a turn to '2' until not possible
# This method ensures that during each iteration, number of turns decreases
# by one while the distance traveled remains the same
if output[1] // 2 > 0:
output[1] -= 2
output[2] += 1
# Do a similar method for the ones above
elif output[2] // 2 > 0:
output[2] -= 2
output[4] += 1
elif output[4] // 2 > 0:
output[4] -= 2
output[8] += 1
return output
Again, this is an exhaustive solution that will have a long runtime with harder examples, but hopefully this will help get you started on finding a more optimal way to solve it!
EDIT:
This answer above can be optimized! Instead of subtracting two turns from a current speed and adding one turn to the next, we can instead find the minimum between half the number of turns of the current speed and the remaining number of turns that need to be gotten rid of. Here's a newer version.
def calc_speeds(distance: int, arrive_in_x_turns: int) -> dict[int, int]:
output = {1: distance, 2: 0, 4: 0, 8: 0}
# Loop until the total number of turns is equal to the number of turns requested
while sum(output.values()) > arrive_in_x_turns:
if output[1] // 2 > 0:
# Calculate turns to subtract from current speed as the minimum between half the
# current turns under this speed and the number of turns remaining
turns_to_subtract = min(output[1] // 2, sum(output.values()) - arrive_in_x_turns)
# Use similar logic to the previous version of this algorithm
output[1] -= turns_to_subtract * 2
output[2] += turns_to_subtract
elif output[2] // 2 > 0:
turns_to_subtract = min(output[2] // 2, sum(output.values()) - arrive_in_x_turns)
output[2] -= turns_to_subtract * 2
output[4] += turns_to_subtract
elif output[4] // 2 > 0:
turns_to_subtract = min(output[4] // 2, sum(output.values()) - arrive_in_x_turns)
output[4] -= turns_to_subtract * 2
output[8] += turns_to_subtract
return output
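For what it's worth, the same trade-up logic can be condensed by iterating over the speeds directly; this is just a sketch of the optimized version above in a tighter form, checked against the question's example:

```python
def calc_speeds(distance: int, arrive_in_x_turns: int) -> dict[int, int]:
    # Start with every turn spent at speed 1, then repeatedly trade
    # two turns at speed s for one turn at speed 2*s; each trade
    # keeps the distance the same while removing one turn.
    output = {1: distance, 2: 0, 4: 0, 8: 0}
    for speed in (1, 2, 4):
        excess = sum(output.values()) - arrive_in_x_turns
        if excess <= 0:
            break
        trade = min(output[speed] // 2, excess)
        output[speed] -= trade * 2
        output[speed * 2] += trade
    return output

result = calc_speeds(25, 10)
print(result)  # {1: 1, 2: 6, 4: 3, 8: 0}
assert sum(result.values()) == 10                   # exactly 10 turns
assert sum(s * n for s, n in result.items()) == 25  # exactly 25 steps
```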
|
76385015 | 76385939 | I am writing a Jolt spec to transform this data but am not getting the desired result.
If practice_loc, prac_num and topId are the same for two or more entries, they will be combined together with separate S1 and S2 values within subList. Otherwise they should pass through as-is, with only subList added.
Data
[
{
"practice_loc": "120",
"prac_num": "oswal",
"topId": "t1",
"S1": "A1",
"S2": "B1"
},
{
"practice_loc": "120",
"prac_num": "oswal",
"topId": "t1",
"S1": "A2",
"S2": ""
},
{
"practice_loc": "334",
"prac_num": "L3",
"topId": "plumcherry",
"S1": "A3",
"S2": ""
},
{
"practice_loc": "987",
"prac_num": "L3",
"topId": "artica",
"S1": "A5",
"S2": "B7"
}
]
Expected Output:
[
{
"practice_loc": "120",
"prac_num": "oswal",
"topId": "t1"
"subList": [
{
"S1": "A1",
"S2": "B1"
},
{
"S1": "A2",
"S2": ""
}
]
},
{
"practice_loc": "334",
"prac_num": "L3",
"topId": "plumcherry"
"subList": [
{
"SubID1": "A3",
"SubID2": ""
}
]
},
{
"practice_loc": "987",
"prac_num": "L3",
"topId": "artica",
"subList": [
{
"SubID1": "A5",
"SubID2": "B7"
}
]
}
]
Here is what I tried, but I didn't get the desired result; it's not printing anything:
[
{
"operation": "shift",
"spec": {
"*": {
"@": "@(1,practice_loc).@(1,prac_num).@(1,topId)"
}
}
},
{
"operation": "cardinality",
"spec": {
"*": {
"*": "MANY"
}
}
},
{
"operation": "shift",
"spec": {
"*": {
"*": {
"*": {
"practice_loc": "[#4].&",
"prac_num": "[#4].&",
"topId": "[#4].&",
"S*": "[#4].subList[&1].&"
}
}
}
}
},
{
"operation": "cardinality",
"spec": {
"*": {
"practice_loc": "ONE",
"prac_num": "ONE",
"topId": "ONE"
}
}
}
]
| Jolt not printing anything | Your current spec is pretty close. It would be suitable to rearrange it like this:
[
{ // group by those three attributes
"operation": "shift",
"spec": {
"*": {
"*": "@1,practice_loc.@1,prac_num.@1,topId.&",
"S*": "@1,practice_loc.@1,prac_num.@1,topId.subList[&1].&"
}
}
},
{ // get rid of wrappers
"operation": "shift",
"spec": {
"*": {
"*": {
"*": {
"@": ""
}
}
}
}
},
{
"operation": "cardinality",
"spec": {
"*": {
"*": "ONE", // pick only single one from repeating components
"subList": "MANY"
}
}
},
{ // get rid of generated nulls within subList arrays
"operation": "modify-overwrite-beta",
"spec": {
"*": "=recursivelySquashNulls"
}
}
]
Edit for illustration: Below I have pasted an image of what I get after toggling the ADVANCED tab of the Configure section for the JoltTransformJSON processor, which has version 1.21.0, as NiFi does. Btw, yours is a recent version as well.
|
76381511 | 76384131 | My problem is that I have found a solution for one group of checkboxes that shows the selected data in a text field in my dynamic form.
But I think the line $('input:checkbox').change((e) does not make sense if I want to use a different, or new, group of checkboxes.
My idea is that the two different groups of checkboxes should get unique ids to work with.
I want to store the data in a MariaDB database.
<tr>
<td>
Entsperrcode:
</td>
<td>
<script src="inc/jquery-3.7.0.min.js"></script>
<br>
<table border="1"cellspacing="0" cellpadding="0">
<tr>
<td><center>1</center></td>
<td><center>2</center></td>
<td><center>3</center></td>
</tr>
<tr>
<td><input type="checkbox" id="entsperrcodewisch1" value="1"></td>
<td><input type="checkbox" id="entsperrcodewisch2" value="2"></td>
<td><input type="checkbox" id="entsperrcodewisch3" value="3"></td>
</tr>
<tr>
<td><center>4</center></td>
<td><center>5</center></td>
<td><center>6</center></td>
</tr>
<tr>
<td><input type="checkbox" id="entsperrcodewisch4" value="4"></td>
<td><input type="checkbox" id="entsperrcodewisch5" value="5"></td>
<td><input type="checkbox" id="entsperrcodewisch6" value="6"></td>
</tr>
<tr>
<td><center>7</center></td>
<td><center>8</center></td>
<td><center>9</center></td>
</tr>
<tr>
<td><input type="checkbox" id="entsperrcodewisch7" value="7"></td>
<td><input type="checkbox" id="entsperrcodewisch8" value="8"></td>
<td><input type="checkbox" id="entsperrcodewisch9" value="9"></td>
</tr>
</table>
<input type="text" id="selected" name="entsperrcode"/><br><br>
<script>
(function() {
$('input:checkbox').change((e) => {
if ($(e.currentTarget).is(':checked')) {
var curVal = $('#selected').val();
if (curVal) {
$('#selected').val(curVal + '-' + e.currentTarget.value);
} else {
$('#selected').val(e.currentTarget.value);
}
} else {
var curVal = $('#selected').val().split('-');
var filteredVal = curVal.filter(el => el.trim() !== e.currentTarget.value)
$('#selected').val(filteredVal.join('-'));
}
});
})();
</script>
</td>
</tr>
<tr>
<td>
Beschädigungen:
</td>
<td>
<script src="inc/jquery-3.7.0.min.js"></script>
<br>
<input type="checkbox" id="beschaedingung1" value="Display"><br>
<input type="checkbox" id="beschaedingung2" value="Rückseite"><br>
<input type="checkbox" id="beschaedingung3" value="Rand"><br>
<input type="text" id="beschaedig" name="beschaedig"/><br><br>
<script>
(function() {
$('input:checkbox').change((e) => {
if ($(e.currentTarget).is(':checked')) {
var curVal = $('#beschaedig').val();
if (curVal) {
$('#beschaedig').val(curVal + '-' + e.currentTarget.value);
} else {
$('#beschaedig').val(e.currentTarget.value);
}
} else {
var curVal = $('#beschaedig').val().split('-');
var filteredVal = curVal.filter(el => el.trim() !== e.currentTarget.value)
$('#beschaedig').val(filteredVal.join('-'));
}
});
})();
</script>
</td>
</tr>
| Different Checkbox groups should write data in textfields | The way I'd approach this is as below, with explanatory comments in the code:
// simple utility variable and functions to reduce some of the repetitive typing:
const D = document,
// here we have means of creating an element, and passing various properties
// to that new element (className, textContent, borderColor...):
create = (tag, props) => Object.assign(D.createElement(tag), props),
// an alias for document, and element, querySelector(), which is used
// depends on the context which is the document by default:
get = (selector, context = D) => context.querySelector(selector),
// as above, but an alias for querySelectorAll(), this explicitly returns
// an Array instead of a NodeList in order to allow for Array methods to be
// used (map(), filter()...):
getAll = (selector, context = D) => [...context.querySelectorAll(selector)];
// named function to handle the events on the <input type="checkbox">
// elements, this is bound later using EventTarget.addEventListener();
// this function takes one argument - 'evt', a reference to the Event Object -
// passed from EventTarget.addEventListener():
const checkboxHandler = (evt) => {
// this is the element to which the event-handling function is bound:
let changed = evt.currentTarget,
// the 'output' is the element in which I'll be showing the results,
// and uses Element.querySelector() to find the first (if any) element
// matching the selector which is found within the closest ancestor
// <fieldset> element:
output = get('.result', changed.closest('fieldset')),
// this retrieves the delimiter custom property defined in the CSS,
// whether via the stylesheet or the inline "style" attribute:
delimiter = window.getComputedStyle(output,null).getPropertyValue("--delimiter"),
// we retrieve the value of the changed element (removing leading/trailing
// white-space; this may or may not be necessary depending on your use-case):
result = changed.value.trim(),
// here we use a template-literal string to concatenate the various
// variables together to create an identifier for the created element
// (an id could be used, given that this is likely to be unique, but I
// chose to assume that conflicts may happen across the document):
resultClass = `${changed.name}${delimiter}${result}`,
// we create a <span> element with the given textContent and className:
resultWrapper = create('span', {
textContent: result,
className: resultClass,
}),
// creating another element - an <em> - to wrap the delimiter character:
delimiterWrapper = create('em', {
textContent: delimiter,
className: "delimiter"
});
// a checkbox may fire the change event both when it's checked or
// unchecked by the user; therefore we first test to see whether it
// was checked (this returns a Boolean true or false):
if (changed.checked) {
// if it was checked we append both the created <em> element, along
// with the created <span> element to the output element:
output.append(delimiterWrapper, resultWrapper);
} else {
// or if it was unchecked, we retrieve the existing element via
// the class we created created earlier and looking within the
// output element, using the alias (above) of element.querySelector()
// by passing a context:
let toRemove = get(`.${resultClass}`, output);
// we then use an Array literal, passing in both the previous element
// sibling of the element we wish to remove and the element itself:
[toRemove.previousElementSibling, toRemove]
// we then use Array.prototype.forEach() to iterate over the Array:
.forEach(
// passing in a reference to the current Array element 'el',
// and removing that element:
(el) => el.remove()
);
}
};
// here we selector all checkbox <input> elements in the document, and
// iterate over that Array:
getAll('input[type=checkbox]').forEach(
// passing in a reference - 'el' - to the current Node of the
// Array of Nodes, and using EventTarget.addEventListener()
// to bind the named function checkboxHandler() (note the deliberately
// missing parentheses) as the event-handler for the 'change' event:
(el) => el.addEventListener('change', checkboxHandler)
);
/* setting a custom property to use later as a basic
demonstration of how custom properties might be
used: */
form {
--labelSize: 3rem;
}
fieldset {
/* allows us to easily style the <input> elements
into a grid, tabular-style, format: */
display: inline grid;
/* defining the space between adjacent elements: */
gap: 0.5rem;
/* defining the size of the various rows: */
grid-auto-rows: var(--labelSize);
/* defining the size of the various columns, using
repeat() to create a number of columns stored in
the --columnCount custom variable, with a default
value of 3: */
grid-template-columns: repeat(var(--columnCount, 3), var(--labelSize));
}
label {
border: 1px solid currentColor;
display: grid;
padding: 0.25rem;
text-align: center;
}
/* this moves the <input> children of the <label>
element after their next sibling element in order
that the DOM allows for a checked <input> to style
the following <span>, despite that <span> visually
appearing before the <input>: */
label > input {
order: 1;
}
/* styling the adjacent <span> of the checked <input>: */
input:checked + span {
/* this isn't particularly great example, but serves
only to demonstrate how the <span> may be styled
based on the state of the check-box: */
background-image:
radial-gradient(
at 0 0,
lightskyblue,
palegreen
);
font-weight: bold;
}
.result {
border: 1px solid currentColor;
/* using flex layout: */
display: flex;
/* shorthand for:
flex-direction: row;
flex-wrap: wrap;
adjust to your preferences: */
flex-flow: row wrap;
gap: 0.25rem;
/* positioning the element so that it
starts in grid-column: 1 (the first)
and finishes in grid-column: -1
(the last) */
grid-column: 1 / -1;
padding-block: 0.25rem;
padding-inline: 0.5rem;
}
/* within the JavaScript we add the .result
elements along with a previous .delimiter
element, so here we style a .delimiter
when it's the first child so that it's not
visible in the document: */
.result .delimiter:first-child {
display: none;
}
<form action="#">
<!-- I opted to wrap each "group" of <input> elements within
a <fieldset> element within a <form>, as your demo didn't
look tabular (the element choices aren't really important;
changes require only minor adjustments to the JavaScript): -->
<fieldset>
<!-- provides a "title" to the grouped <input> elements: -->
<legend>Group 1</legend>
<!-- wrapping the <input> within the <label> so that clicking
either the <input> or the text will update the checked
state of the <input> and trigger the change event: -->
<label><input type="checkbox" value="1" name="group-1">
<span class="labelText">1</span>
<!-- to associate the <input> elements together in their
"groups" I've taken advantage of the "name"
attribute in place of the (invalid duplication of
"id" attributes) -->
</label>
<label><input type="checkbox" value="2" name="group-1">
<span class="labelText">2</span>
</label>
<label><input type="checkbox" value="3" name="group-1">
<span class="labelText">3</span>
</label>
<label><input type="checkbox" value="4" name="group-1">
<span class="labelText">4</span>
</label>
<label><input type="checkbox" value="5" name="group-1">
<span class="labelText">5</span>
</label>
<label><input type="checkbox" value="6" name="group-1">
<span class="labelText">6</span>
</label>
<label><input type="checkbox" value="7" name="group-1">
<span class="labelText">7</span>
</label>
<label><input type="checkbox" value="8" name="group-1">
<span class="labelText">8</span>
</label>
<label><input type="checkbox" value="9" name="group-1">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 2</legend>
<label><input type="checkbox" value="1" name="group-2">
<span class="labelText">1</span>
</label>
<label><input type="checkbox" value="2" name="group-2">
<span class="labelText">2</span>
</label>
<label><input type="checkbox" value="3" name="group-2">
<span class="labelText">3</span>
</label>
<label><input type="checkbox" value="4" name="group-2">
<span class="labelText">4</span>
</label>
<label><input type="checkbox" value="5" name="group-2">
<span class="labelText">5</span>
</label>
<label><input type="checkbox" value="6" name="group-2">
<span class="labelText">6</span>
</label>
<label><input type="checkbox" value="7" name="group-2">
<span class="labelText">7</span>
</label>
<label><input type="checkbox" value="8" name="group-2">
<span class="labelText">8</span>
</label>
<label><input type="checkbox" value="9" name="group-2">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 3</legend>
<label><input type="checkbox" value="1" name="group-3">
<span class="labelText">1</span>
</label>
<label><input type="checkbox" value="2" name="group-3">
<span class="labelText">2</span>
</label>
<label><input type="checkbox" value="3" name="group-3">
<span class="labelText">3</span>
</label>
<label><input type="checkbox" value="4" name="group-3">
<span class="labelText">4</span>
</label>
<label><input type="checkbox" value="5" name="group-3">
<span class="labelText">5</span>
</label>
<label><input type="checkbox" value="6" name="group-3">
<span class="labelText">6</span>
</label>
<label><input type="checkbox" value="7" name="group-3">
<span class="labelText">7</span>
</label>
<label><input type="checkbox" value="8" name="group-3">
<span class="labelText">8</span>
</label>
<label><input type="checkbox" value="9" name="group-3">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 4</legend>
<label><input type="checkbox" value="1" name="group-4">
<span class="labelText">1</span>
</label>
<label><input type="checkbox" value="2" name="group-4">
<span class="labelText">2</span>
</label>
<label><input type="checkbox" value="3" name="group-4">
<span class="labelText">3</span>
</label>
<label><input type="checkbox" value="4" name="group-4">
<span class="labelText">4</span>
</label>
<label><input type="checkbox" value="5" name="group-4">
<span class="labelText">5</span>
</label>
<label><input type="checkbox" value="6" name="group-4">
<span class="labelText">6</span>
</label>
<label><input type="checkbox" value="7" name="group-4">
<span class="labelText">7</span>
</label>
<label><input type="checkbox" value="8" name="group-4">
<span class="labelText">8</span>
</label>
<label><input type="checkbox" value="9" name="group-4">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
</form>
JS Fiddle demo.
In response to OP's comment:
... but if I change the name "group-1" to "group1[]" to build an array the code does not reset the numbers.
I've updated the name attribute to reflect those requirements, and added a custom data-* attribute from which the selector is derived (purely for simplicity's sake, as [ and ] would have to be escaped in order to be used as part of the selector, which is needlessly complex, though still possible of course).
This leads to the following, adjusted code:
console.clear();
const D = document,
create = (tag, props) => Object.assign(D.createElement(tag), props),
get = (selector, context = D) => context.querySelector(selector),
getAll = (selector, context = D) => [...context.querySelectorAll(selector)];
const checkboxHandler = (evt) => {
let changed = evt.currentTarget,
output = get('.result', changed.closest('fieldset')),
delimiter = window.getComputedStyle(output, null).getPropertyValue("--delimiter"),
result = changed.value.trim(),
// here we use the Element.dataset API to form the resultClass selector to
// enable easy removal of the value from the <output>
resultClass = `${changed.dataset.name}${delimiter}${result}`,
resultWrapper = create('span', {
textContent: result,
className: resultClass,
}),
delimiterWrapper = create('em', {
textContent: delimiter,
className: "delimiter"
});
if (changed.checked) {
output.append(delimiterWrapper, resultWrapper);
} else {
let toRemove = get(`.${resultClass}`, output);
[toRemove.previousElementSibling, toRemove].forEach((el) => el.remove());
}
};
getAll('input[type=checkbox]').forEach(
(el) => el.addEventListener('change', checkboxHandler)
);
form {
--labelSize: 3rem;
}
fieldset {
--accent: palegreen;
display: inline grid;
gap: 0.5rem;
grid-auto-rows: var(--labelSize);
grid-template-columns: repeat(var(--columnCount, 3), var(--labelSize));
}
label {
border: 1px solid currentColor;
display: grid;
padding: 0.25rem;
text-align: center;
}
label input {
accent-color: var(--accent, unset);
order: 1;
}
input:checked+span {
background-image: linear-gradient( 90deg, aqua, var(--accent, transparent));
}
.result {
border: 1px solid currentColor;
display: flex;
flex-flow: row wrap;
gap: 0.25rem;
grid-column: span 3;
padding-block: 0.25rem;
padding-inline: 0.5rem;
}
.result .delimiter:first-child {
display: none;
}
<form action="#">
<fieldset>
<legend>Group 1</legend>
<label>
<!-- adding the "array-indicator" to the "name" attribute,
and adding an additional data-name custom attribute: -->
<input type="checkbox" value="1" name="group-1[]" data-name="group-1">
<span class="labelText">1</span>
</label>
<label>
<input type="checkbox" value="2" name="group-1[]" data-name="group-1">
<span class="labelText">2</span>
</label>
<label>
<input type="checkbox" value="3" name="group-1[]" data-name="group-1">
<span class="labelText">3</span>
</label>
<label>
<input type="checkbox" value="4" name="group-1[]" data-name="group-1">
<span class="labelText">4</span>
</label>
<label>
<input type="checkbox" value="5" name="group-1[]" data-name="group-1">
<span class="labelText">5</span>
</label>
<label>
<input type="checkbox" value="6" name="group-1[]" data-name="group-1">
<span class="labelText">6</span>
</label>
<label>
<input type="checkbox" value="7" name="group-1[]" data-name="group-1">
<span class="labelText">7</span>
</label>
<label>
<input type="checkbox" value="8" name="group-1[]" data-name="group-1">
<span class="labelText">8</span>
</label>
<label>
<input type="checkbox" value="9" name="group-1[]" data-name="group-1">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 2</legend>
<label>
<input type="checkbox" value="1" name="group-2[]" data-name="group-2">
<span class="labelText">1</span>
</label>
<label>
<input type="checkbox" value="2" name="group-2[]" data-name="group-2">
<span class="labelText">2</span>
</label>
<label>
<input type="checkbox" value="3" name="group-2[]" data-name="group-2">
<span class="labelText">3</span>
</label>
<label>
<input type="checkbox" value="4" name="group-2[]" data-name="group-2">
<span class="labelText">4</span>
</label>
<label>
<input type="checkbox" value="5" name="group-2[]" data-name="group-2">
<span class="labelText">5</span>
</label>
<label>
<input type="checkbox" value="6" name="group-2[]" data-name="group-2">
<span class="labelText">6</span>
</label>
<label>
<input type="checkbox" value="7" name="group-2[]" data-name="group-2">
<span class="labelText">7</span>
</label>
<label>
<input type="checkbox" value="8" name="group-2[]" data-name="group-2">
<span class="labelText">8</span>
</label>
<label>
<input type="checkbox" value="9" name="group-2[]" data-name="group-2">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 3</legend>
<label>
<input type="checkbox" value="1" name="group-3[]" data-name="group-3">
<span class="labelText">1</span>
</label>
<label>
<input type="checkbox" value="2" name="group-3[]" data-name="group-3">
<span class="labelText">2</span>
</label>
<label>
<input type="checkbox" value="3" name="group-3[]" data-name="group-3">
<span class="labelText">3</span>
</label>
<label>
<input type="checkbox" value="4" name="group-3[]" data-name="group-3">
<span class="labelText">4</span>
</label>
<label>
<input type="checkbox" value="5" name="group-3[]" data-name="group-3">
<span class="labelText">5</span>
</label>
<label>
<input type="checkbox" value="6" name="group-3[]" data-name="group-3">
<span class="labelText">6</span>
</label>
<label>
<input type="checkbox" value="7" name="group-3[]" data-name="group-3">
<span class="labelText">7</span>
</label>
<label>
<input type="checkbox" value="8" name="group-3[]" data-name="group-3">
<span class="labelText">8</span>
</label>
<label>
<input type="checkbox" value="9" name="group-3[]" data-name="group-3">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
<fieldset>
<legend>Group 4</legend>
<label>
<input type="checkbox" value="1" name="group-4[]" data-name="group-4">
<span class="labelText">1</span>
</label>
<label>
<input type="checkbox" value="2" name="group-4[]" data-name="group-4">
<span class="labelText">2</span>
</label>
<label>
<input type="checkbox" value="3" name="group-4[]" data-name="group-4">
<span class="labelText">3</span>
</label>
<label>
<input type="checkbox" value="4" name="group-4[]" data-name="group-4">
<span class="labelText">4</span>
</label>
<label>
<input type="checkbox" value="5" name="group-4[]" data-name="group-4">
<span class="labelText">5</span>
</label>
<label>
<input type="checkbox" value="6" name="group-4[]" data-name="group-4">
<span class="labelText">6</span>
</label>
<label>
<input type="checkbox" value="7" name="group-4[]" data-name="group-4">
<span class="labelText">7</span>
</label>
<label>
<input type="checkbox" value="8" name="group-4[]" data-name="group-4">
<span class="labelText">8</span>
</label>
<label>
<input type="checkbox" value="9" name="group-4[]" data-name="group-4">
<span class="labelText">9</span>
</label>
<output class="result" style="--delimiter: -;"></output>
</fieldset>
</form>
JS Fiddle demo.
References:
CSS:
background-image.
CSS custom properties.
CSS logical properties.
display.
:first-child.
gap.
grid-auto-rows.
grid-template-columns.
linear-gradient().
order.
radial-gradient().
padding.
padding-block.
padding-inline.
repeat().
text-align.
var().
JavaScript:
Array literals.
Array.prototype.filter().
Array.prototype.forEach().
Array.prototype.map().
document.createElement().
document.querySelector().
document.querySelectorAll().
Element.append().
Element.previousElementSibling().
Element.remove().
Element.querySelector().
Element.querySelectorAll().
EventTarget.addEventListener().
HTMLElement.dataset API.
Object.assign().
Template literals.
|
76384666 | 76385966 | I am building a docker-compose.yml file inside a workspace, but when I try to run docker-compose I can't start my services because of the following error: ENOENT: no such file or directory, open '/entity-service/package.json'. The same error happens when trying to start the agent-portal-service container, only with the path /agent-portal-service/package.json. My projects do have a package.json file, so I think this is related to the context that docker-compose is running in.
Here is my workspace structure:
├── agent-portal-service
├── data
├── docker-compose.yml
└── entity-service
My docker-compose.yml:
version: "3"
services:
agent_portal_service:
working_dir: /agent-portal-service
container_name: agent_portal_service
image: node:16
volumes:
- /agent-workspace:/agent-portal-service
ports:
- 3100:3100
command: npm run start:debug
tty: true
entity_service:
working_dir: /entity-service
container_name: entity_service
image: node:16
volumes:
- /agent-workspace:/entity-service
ports:
- 3101:3101
command: npm run start:debug
tty: true
depends_on:
- mongodb
mongodb:
container_name: mongodb
image: mongo:4.4.6
ports:
- 27017:27017
volumes:
- ./data/db:/data/db
command: mongod --port 27017
restart: always
I am expecting to be able to run the agent_portal_service and entity_service containers successfully
| Can't find services package.json when running docker-compose | The important parts of your entity-service definition look like this
entity_service:
working_dir: /entity-service
volumes:
- /agent-workspace:/entity-service
command: npm run start:debug
You map /agent-workspace to /entity-service in the volumes section. You also set your working directory to /entity-service. In effect, your working directory is /agent-workspace on your host machine. There's no package.json file in the /agent-workspace directory, so your npm command fails.
To fix it, I'd map /agent-workspace/entity-service to /entity-service instead. That way, you'll have a package.json file in your /entity-service directory inside the container.
Like this
volumes:
- /agent-workspace/entity-service:/entity-service
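The same adjustment applies to agent_portal_service. Assuming agent-portal-service sits directly under /agent-workspace, as the workspace tree shows, a sketch of both corrected mappings would be:

```yaml
# Sketch only: mount each project subdirectory, not the whole workspace,
# so that each working_dir contains that project's package.json.
agent_portal_service:
  working_dir: /agent-portal-service
  volumes:
    - /agent-workspace/agent-portal-service:/agent-portal-service
entity_service:
  working_dir: /entity-service
  volumes:
    - /agent-workspace/entity-service:/entity-service
```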
|
76382916 | 76384251 | I have two functions that take a single options argument with differing properties, except for type, which is used to identify the function.
type FuncAOptions = {
type: 'A'
opt1: string
opt2: boolean
}
function funcA(options: FuncAOptions): number {
if (!options.opt1) throw new Error('Missing required option')
return 1
}
type FuncBOptions = {
type: 'B'
opt3: number
opt4: (a: number) => number
}
function funcB(options: FuncBOptions): string {
if (!options.opt3) throw new Error('Missing required option')
return 'B'
}
I then have a map of these functions, with an associated mapped type, so that I can call the functions conditionally with variable runtime data.
type AllFunctions = FuncAOptions | FuncBOptions
type FunctionMap = { [K in AllFunctions['type']]: (options: any) => any; }
const functionMap: FunctionMap = {
A: funcA,
B: funcB
}
function callFunction(type: keyof typeof functionMap, options: any) {
return functionMap[type](options)
}
When I call the functions directly, I get the proper type-checking to know if I'm passing an incorrect set of options. I want to be able to do the same when calling the function through the intermediary method.
callFunction('A', { type: 'B', opt3: 'Hello' }) // NO TS ERROR
funcA({ type: 'B', opt3: 'Hello' }) // TS ERROR: The expected type comes from property 'type' which is declared here on type 'FuncAOptions'
I like having the map typed with K in AllFunctions['type'] because when I add a function to AllFunctions, I am reminded that I need to add the key-value pair to functionMap.
Full example here
| Specify function parameter types in TypeScript mapped type | If you want functionMap[type](options) to type check without loosening things up with type assertions or the any type, then you'll need to write it in terms of generic indexes into a base key-value type or mapped types of that type. This is as described in microsoft/TypeScript#47109.
Essentially you want type to be seen as some generic type K, and functionMap to be seen as some mapped type like {[P in keyof FuncOptionMap]: (arg: FuncOptionMap[P]) => FuncRetMap[P]} and for options to be seen as type FuncOptionMap[K]. Then the compiler can conclude that the function functionMap[type] is of type (arg: FuncOptionMap[K]) => FuncRetMap[K], and is therefore callable with options as an argument, and will return the corresponding return type FuncRetMap[K]. So we'll need to define FuncOptionMap and FuncRetMap in terms of the values you have.
You might have hoped that you could have done this with type merely being the union type equivalent to keyof FuncOptionMap without needing generics. But TypeScript can't follow that sort of logic, as described in microsoft/TypeScript#30581. The recommended approach is to use generic indexing into mapped types instead; indeed microsoft/TypeScript#47109 is the solution to microsoft/TypeScript#30581 (or at least the closest we have to a solution).
It could look like this:
const _functionMap = {
A: funcA,
B: funcB
}
type FuncOptionMap =
{ [K in keyof typeof _functionMap]: Parameters<typeof _functionMap[K]>[0] }
type FuncRetMap =
{ [K in keyof typeof _functionMap]: ReturnType<typeof _functionMap[K]> }
const functionMap: { [K in keyof FuncOptionMap]: (options: FuncOptionMap[K]) => FuncRetMap[K] }
= _functionMap
Essentially we rename your functionMap to _functionMap and then later assign it back with the FuncOptionMap and FuncRetMap types computed from it. It might look like a no-op, since the type of _functionMap and the type of functionMap appear to be identical. But this is actually very important; the compiler can only follow the logic when things are written as this mapping. If you try to use _functionMap instead of functionMap in what follows, the compiler will lose the thread and output an error.
Continuing:
function callFunction<K extends keyof FuncOptionMap>(
type: K, options: FuncOptionMap[K]
) {
return functionMap[type](options); // okay
}
That type checks, and the return type is FuncRetMap[K]. Now you get the error you expected:
callFunction('A', { type: 'B', opt3: 'Hello' }) // error!
// ---------------------> ~~~ B is not A
and when you call it the compiler knows how the output type depends on the input type:
console.log(callFunction(
"B",
{ type: "B", opt3: 1, opt4: x => x + 1 }
).toLowerCase()); // okay, the compiler knows it's string and not number
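Picking up the question's point about being reminded to extend functionMap: as a self-contained, hedged miniature of the same pattern (the C entry and opt5 are hypothetical additions, not part of the original question), a new handler added to the base map flows through the option map, return map, and caller automatically:

```typescript
// Miniature of the pattern; "C"/opt5 are hypothetical additions.
const handlers = {
  A: (o: { type: 'A'; opt1: string }) => o.opt1.length,
  B: (o: { type: 'B'; opt3: number }) => String(o.opt3),
  C: (o: { type: 'C'; opt5: boolean }) => !o.opt5, // new entry: nothing else changes
};
type OptMap = { [K in keyof typeof handlers]: Parameters<typeof handlers[K]>[0] };
type RetMap = { [K in keyof typeof handlers]: ReturnType<typeof handlers[K]> };
// the re-annotated map is what lets the generic call below type check:
const map: { [K in keyof OptMap]: (o: OptMap[K]) => RetMap[K] } = handlers;

function call<K extends keyof OptMap>(type: K, options: OptMap[K]): RetMap[K] {
  return map[type](options);
}

console.log(call('C', { type: 'C', opt5: false })); // true
console.log(call('A', { type: 'A', opt1: 'hi' }));  // 2
```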
Playground link to code
|
76381320 | 76384140 | I'm doing a pet project of a social network and I'm having problems with authorization. I'm getting Access Denied when I send a request with a JWT token.
I am doing the following chain of actions:
Registration, where I specify the login, email and password (works well)
Authorization, where I get a token by login and password (works well)
Sending a request in Postman with the received token, where I get an error (Access Denied)
What's wrong with my code?
My Security Config:
@Configuration
@EnableWebSecurity
@EnableMethodSecurity( proxyTargetClass = true)
public class WebSecurityConfig {
@Autowired
UserDetailsServiceImpl userDetailsService;
@Autowired
private AuthEntryPointJwt unauthorizedHandler;
@Bean
public AuthTokenFilter authenticationJwtTokenFilter() {
return new AuthTokenFilter();
}
@Bean
public DaoAuthenticationProvider authenticationProvider() {
DaoAuthenticationProvider authProvider = new DaoAuthenticationProvider();
authProvider.setUserDetailsService(userDetailsService);
authProvider.setPasswordEncoder(passwordEncoder());
return authProvider;
}
@Bean
public AuthenticationManager authenticationManager(AuthenticationConfiguration authConfig) throws Exception {
return authConfig.getAuthenticationManager();
}
@Bean
public PasswordEncoder passwordEncoder() {
return new BCryptPasswordEncoder();
}
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http.cors().and().csrf().disable()
.exceptionHandling()
.authenticationEntryPoint(unauthorizedHandler)
.and()
.sessionManagement()
.sessionCreationPolicy(SessionCreationPolicy.STATELESS)
.and()
.authorizeHttpRequests()
.requestMatchers("/auth/**").permitAll()
.requestMatchers("/swagger/**").permitAll()
.requestMatchers("/swagger-ui/**").permitAll()
.requestMatchers("/v3/api-docs/**").permitAll()
.requestMatchers("/auth/test/**").permitAll()
.requestMatchers("/h2/**").permitAll()
.anyRequest().authenticated();
http.authenticationProvider(authenticationProvider());
http.addFilterBefore(authenticationJwtTokenFilter(), UsernamePasswordAuthenticationFilter.class);
return http.build();
}
}
AuthTokenFilter:
@Slf4j
public class AuthTokenFilter extends OncePerRequestFilter {
@Autowired
private JwtUtils jwtUtils;
@Autowired
private UserDetailsServiceImpl userDetailsService;
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
try {
String accessToken = parseJwt(request);
if (accessToken != null && jwtUtils.validateJwtToken(accessToken)) {
String username = jwtUtils.getUserNameFromJwtToken(accessToken);
UserDetails userDetails = userDetailsService.loadUserByUsername(username);
UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken(userDetails, null,
userDetails.getAuthorities());
authentication.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
SecurityContextHolder.getContext().setAuthentication(authentication);
}
} catch (Exception e) {
log.error("Cannot set user authentication: {}", e.getMessage());
}
filterChain.doFilter(request, response);
}
private String parseJwt(HttpServletRequest request) {
String headerAuth = request.getHeader("Authorization");
if (StringUtils.hasText(headerAuth) && headerAuth.startsWith("Bearer ")) {
return headerAuth.substring(7, headerAuth.length());
}
return null;
}
}
JWTUtils:
@Component
@Slf4j
public class JwtUtils {
@Value("${jwt.token.secret}")
private String jwtSecret;
@Value("${jwt.token.jwtExpirationMs}")
private int jwtExpirationMs;
public String generateJwtToken(UserDetailsImpl userPrincipal) {
return generateTokenFromUsername(userPrincipal.getUsername());
}
public String generateTokenFromUsername(String username) {
return Jwts.builder().setSubject(username).setIssuedAt(new Date())
.setExpiration(new Date((new Date()).getTime() + jwtExpirationMs)).signWith(SignatureAlgorithm.HS512, jwtSecret)
.compact();
}
public String getUserNameFromJwtToken(String token) {
return Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(token).getBody().getSubject();
}
public boolean validateJwtToken(String authToken) {
try {
Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(authToken);
return true;
} catch (SignatureException e) {
log.error("Invalid JWT signature: {}", e.getMessage());
} catch (MalformedJwtException e) {
log.error("Invalid JWT token: {}", e.getMessage());
} catch (ExpiredJwtException e) {
log.error("JWT token is expired: {}", e.getMessage());
} catch (UnsupportedJwtException e) {
log.error("JWT token is unsupported: {}", e.getMessage());
} catch (IllegalArgumentException e) {
log.error("JWT claims string is empty: {}", e.getMessage());
}
return false;
}
}
Controller method:
@GetMapping("/requests")
@PreAuthorize("hasRole('ROLE_USER')")
public UsernamesResponse getFriendRequests() {
return userService.getFriendRequests();
}
Stacktrace:
org.springframework.security.access.AccessDeniedException: Access Denied
at org.springframework.security.authorization.method.AuthorizationManagerBeforeMethodInterceptor.attemptAuthorization(AuthorizationManagerBeforeMethodInterceptor.java:257) ~[spring-security-core-6.0.0.jar:6.0.0]
at org.springframework.security.authorization.method.AuthorizationManagerBeforeMethodInterceptor.invoke(AuthorizationManagerBeforeMethodInterceptor.java:198) ~[spring-security-core-6.0.0.jar:6.0.0]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) ~[spring-aop-6.0.2.jar:6.0.2]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:752) ~[spring-aop-6.0.2.jar:6.0.2]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:703) ~[spring-aop-6.0.2.jar:6.0.2]
at ru.effectivemobile.socialnetwork.controller.UserController$$SpringCGLIB$$0.getFriendRequests(<generated>) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:207) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:152) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:884) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1080) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:973) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1003) ~[spring-webmvc-6.0.2.jar:6.0.2]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:895) ~[spring-webmvc-6.0.2.jar:6.0.2]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:705) ~[tomcat-embed-core-10.1.1.jar:6.0]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:880) ~[spring-webmvc-6.0.2.jar:6.0.2]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:814) ~[tomcat-embed-core-10.1.1.jar:6.0]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:223) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:110) ~[spring-web-6.0.2.jar:6.0.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:365) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:100) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:131) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:85) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at ru.effectivemobile.socialnetwork.security.jwt.AuthTokenFilter.doFilterInternal(AuthTokenFilter.java:47) ~[classes/:na]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.context.SecurityContextHolderFilter.doFilterInternal(SecurityContextHolderFilter.java:69) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) ~[spring-security-web-6.0.0.jar:6.0.0]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:351) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267) ~[spring-web-6.0.2.jar:6.0.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.0.2.jar:6.0.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.2.jar:6.0.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:119) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:400) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-10.1.1.jar:10.1.1]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
EDIT 1
Jwt token example:
eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJ0ZXN0IiwiaWF0IjoxNjg1NjIyMjc4LCJleHAiOjE2ODU2MjU4Nzh9.A_gw2V1c403vxANRcO5LCU7621TMYvYeBKKb-YJv_ZTjEPKym140YAlIjnqAOhoQtfhKRm2O-pJke-7zzglzSg
EDIT 2
| Spring Security Access Denied with Spring Boot 3.0 | Your problem relates to the difference in Spring between roles and authorities, and the Spring convention that treats roles as authorities prefixed with "ROLE_".
In order to use the annotation @PreAuthorize("hasRole('ROLE_USER')") you need a Permission as follows:
ROLE_USER("ROLE_USER")
Which is related to your ERole enum as follows:
ROLE_USER(Set.of(Permission.ROLE_USER, Permission.READ))
If you are only using the Permission.READ, etc for Controller level authority checking then drop them.
Alternatively, use an annotation like @PreAuthorize("hasAuthority('READ')") and drop the roles.
I have made changes that work (on branch fix/authorities at https://github.com/KoosieDeMoer/social-network.git) with a stubbed-out repo.
|
76381531 | 76384167 | I have a Wix installer.
In the Product.wxs file, I have the below piece of code:
<Product Id="*"
Name="$(var.PRODUCT_NAME)"
Language="1033"
Version="!(bind.FileVersion.myDLLfile)"
Manufacturer="$(var.CompanyName)"
UpgradeCode="{D00BA432-7798-588A-34DF-34A65378FD45}">
In the Features.wxs, I have the myDLLfile defined as component:
<Component Id="myDLLfile"
Guid="{30550881-053F-768D-88B7-BB9853B23C51}">
<File Id="myDLLfile"
Source="$(var.dllDir)\myDLLfile.dll"
KeyPath="yes"
Checksum="yes"/>
</Component>
Now, I would like to know if, from a custom action in C#, I can get that same Product Version (which corresponds to the version of myDLLfile.dll). Is it possible? If so, how?
| Get product version from a custom action | The easiest way is probably to store the value in a Property and read that property in the custom action. The bind variable syntax would make setting the property easy.
<Property Id="ThisIsASillyThingToNeedToDo" Value="!(bind.FileVersion.myDLLfile)" />
|
76384063 | 76385989 | I have the following query.
SELECT *
FROM user u
LEFT JOIN operator o ON o.id = u.id
WHERE u.user_type_id IN (2,4) AND u.is_enabled = 1 AND u.office_id = 225
If I run explain on the query above, it shows that it uses the index IX_user_type for the table user.
If I just change the office_id comparison value like the following, the execution plan changes.
SELECT *
FROM user u
LEFT JOIN operator o ON o.id = u.id
WHERE u.user_type_id IN (2,4) AND u.is_enabled = 1 AND u.office_id = 32365487
In this case, the explain shows that the indexes used for the table user are fk_user_office,IX_user_is_enabled.
From my tests I would say that, performance-wise, the first execution plan is much better than the second one. Now, I know I can force MySQL to use the index I want, but I would like to understand why this happens. Why would MySQL pick one index over another based on a query parameter?
| Why does MySQL change query execution plan based on query parameters? | MySQL may decide not to use the index on office_id if the value you are searching for is too common.
By analogy, why doesn't a book include common words like "the" in the index at the back of the book? Because such common words occur on a majority of pages in the book. It's unnecessary to keep a list of those pages under the respective word in the index, because it's easier to tell the reader to read all the pages in the book, without the index lookup.
Similarly, if MySQL estimates that a given value you are searching for occurs on a high enough portion of the pages, it looks for another index if you have other conditions, and if none are found, then it resorts to a table-scan.
In this case, I'd ask if you can confirm that office_id 225 is very common in this table.
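To make the idea concrete, here is a toy sketch of that decision (an illustration only, not MySQL's actual cost estimator, and the 30% threshold is an invented number): estimate what fraction of rows match the constant, and fall back to a scan once the value is too common.

```python
# Toy model of selectivity-based access-path choice.
# NOT MySQL's real cost model; it only mirrors the idea that a very
# common value makes the office_id index unattractive.

def choose_access_path(office_ids, target, scan_threshold=0.30):
    """Prefer the index only when the searched value is rare enough."""
    matching = sum(1 for oid in office_ids if oid == target)
    selectivity = matching / len(office_ids)
    return "table_scan" if selectivity > scan_threshold else "index"

# office_id 225 is very common in this fake table, 32365487 is rare
rows = [225] * 900 + [32365487] * 5 + [7] * 95

print(choose_access_path(rows, 225))       # -> table_scan (common value)
print(choose_access_path(rows, 32365487))  # -> index (rare value)
```

In real MySQL the estimate comes from index statistics (index dives and, in 8.0, histograms collected by ANALYZE TABLE), which is why the plan can flip between two queries that differ only in the literal value.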
One more thought: The best index of all for the query you show would be a compound index on (office_id, is_enabled, user_type). Then it would be able to use that index to narrow down the search by all three columns at once.
You might like my presentation How to Design Indexes, Really or the video. I also have a chapter on index design in my book SQL Antipatterns, Volume 1:
Avoiding the Pitfalls of Database Programming.
|
76382547 | 76384317 | How can I modify this script to be able to see/print some of the results and write the output as JSON:
from google.cloud import bigquery
def query_stackoverflow(project_id="gwas-386212"):
client = bigquery.Client()
query_job = client.query(
"""
WITH
SNP_info AS (
SELECT
CONCAT(CAST(rs_id AS string)) AS identifier
FROM
`gwas-386212.gwas_dataset_1.SNPs_intergenic_vep_pha005199`)
SELECT
*
FROM
SNP_info
JOIN (
SELECT
CONCAT(CAST(rs_id AS string)) AS identifier,
chr_id AS chr_id,
position AS position,
ref_allele AS ref,
alt_allele AS alt,
most_severe_consequence AS most_severe_consequence,
gene_id_any_distance AS gene_id_any_distance,
gene_id_any AS gene_id_any,
gene_id_prot_coding_distance AS gene_id_prot_coding_distance,
gene_id_prot_coding AS gene_id_prot_coding
FROM
`bigquery-public-data.open_targets_genetics.variants`) variants
ON
SNP_info.identifier = variants.identifier"""
)
results = client.query(query)
for row in results:
title = row['identifier']
identifier = row['identifier']
#print(f'{identifier}')
This just prints one column, the identifier. I want to save the resulting table in JSON format. The JSON from the Google Cloud platform should look something like this:
[{
"identifier": "rs62063022",
"identifier_1": "rs62063022",
"chr_id": "17",
"position": "51134537",
"ref": "T",
"alt": "G",
"most_severe_consequence": "intergenic_variant",
"gene_id_any_distance": "13669",
"gene_id_any": "ENSG00000008294",
"gene_id_prot_coding_distance": "13669",
"gene_id_prot_coding": "ENSG00000008294"
}, {
"identifier": "rs12944420",
"identifier_1": "rs12944420",
"chr_id": "17",
"position": "42640692",
"ref": "T",
"alt": "C",
"most_severe_consequence": "intergenic_variant",
"gene_id_any_distance": "18592",
"gene_id_any": "ENSG00000037042",
"gene_id_prot_coding_distance": "18592",
"gene_id_prot_coding": "ENSG00000037042"
},
| Save the output of bigquery in JSON from python | Check out the json documentation for further information. Note that in your code results = client.query(query) references an undefined variable query; use results = query_job.result() to fetch the rows.
import json
records = [dict(row) for row in results]
with open("bigquery_response.json", "w") as out_file:
    json.dump(records, out_file, indent=6)
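A slightly fuller, self-contained sketch of the same approach (plain dicts stand in for the client's Row objects, which are also mapping-like; default=str is a pragmatic fallback for values such as DATE/TIMESTAMP columns that json cannot serialize directly):

```python
import datetime
import json

def rows_to_json(rows, path=None):
    """Convert an iterable of mapping-like rows to a JSON string.

    Non-serializable values (dates, Decimals, ...) are stringified via
    default=str. If path is given, the JSON is also written to that file.
    """
    records = [dict(row) for row in rows]
    text = json.dumps(records, indent=2, default=str)
    if path is not None:
        with open(path, "w") as fh:
            fh.write(text)
    return text

# Stand-ins for BigQuery result rows:
fake_rows = [
    {"identifier": "rs62063022", "chr_id": "17", "position": 51134537,
     "loaded": datetime.date(2023, 6, 1)},
]
print(rows_to_json(fake_rows))
```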
|
76381598 | 76384253 | Currently I'm using a synchronous UIImage extension.
struct PostView: View {
let url: String
var body: some View {
PrivateImageView(image: UIImage(url:url))
}
}
extension UIImage {
public convenience init(url: String) {
let url = URL(string: url)
do {
let data = try Data(contentsOf: url!)
self.init(data: data)!
return
} catch let err {
print("Error : \(err.localizedDescription)")
}
self.init()
}
}
When I post an image, I get the error Synchronous URL loading of http://local.host/images/123.jpeg should not occur on this application's main thread as it may lead to UI unresponsiveness. Please switch to an asynchronous networking API such as URLSession. at the line try Data(contentsOf: url!).
In PostView I use PrivateImageView.
To use the view I have to designate the argument like this PrivateImageView(image: UIImage(xxxxxxxxx)).
I mean I have to use UIImage() not AsyncImage.
I don't know how to change convenience init to adjust to PrivateImageView.
Please tell me how to use an async function in this context.
| How to rewrite sync method using SwiftUI? | There is no way to get data from the inter-/intranet synchronously; you have to use an async method and account for the time it takes to download.
extension String {
public func getUIImage() async throws -> UIImage {
guard let url = URL(string: self) else {
throw URLError(.badURL)
}
let (data, response) = try await URLSession.shared.data(from: url)
guard let httpResponse = response as? HTTPURLResponse else {
throw URLError(.badServerResponse)
}
guard httpResponse.statusCode == 200 else {
throw URLError(URLError.Code(rawValue: httpResponse.statusCode))
}
guard let image = UIImage(data: data) else {
throw URLError(.fileDoesNotExist)
}
return image
}
}
extension UIImage {
static public func fromURL(url: String) async throws -> UIImage {
let image = try await url.getUIImage()
return image
}
}
You can rewrite PostView to something like this:
struct PostView: View {
let url: String
@State private var uiImage: UIImage?
var body: some View {
Group{
if let uiImage {
PrivateImageView(image: uiImage)
} else {
ProgressView() //Show this while downloading
.task {
do {
self.uiImage = try await url.getUIImage()
// or
// self.uiImage = try await UIImage.fromURL(url: url)
} catch {
print(error)
}
}
}
}
}
}
|
76385236 | 76386062 | I am setting up a very simple test site on my localhost under IIS which needs to be accessible locally over https. I have followed the steps below:
In IIS for my local server, I have created a self signed certificate and stored in the "Personal" store
I have added an https binding for my test site to this new certificate; the hostname is testsite and the port is 7001.
Added the entry '127.0.0.1 testsite' to hosts file.
When I try to access https://testsite/index.html through the browser, the browser returns the following error:
NET::ERR_CERT_COMMON_NAME_INVALID
The same problem occurs if I add the port number to the URL, i.e. https://testsite:7001/index.html
More information on the error shows the following:
This server could not prove that it is testsite; its security certificate is from Muzz2. This may be caused by a misconfiguration or an attacker intercepting your connection.
| Self signed certificate not working on localhost IIS | The solution involved using PowerShell rather than IIS Manager to generate the self-signed certificate. IIS always used the machine name rather than the site name as the common name.
The powershell command I used was as follows:
New-SelfSignedCertificate -DnsName testsite -CertStoreLocation cert:\LocalMachine\My
After that, mmc was used to place the certificate in the Trusted Root Certification Authorities store, and the certificate bindings were updated in IIS as you usually would.
|
76378736 | 76384351 | I am trying to use XE7 to connect to an in-house REDCap server. REDCap has a detailed description of the API at https://education.arcus.chop.edu/redcap-api/ and a test server at https://bbmc.ouhsc.edu/redcap/api with a test token key. There is assistance at https://mran.microsoft.com/snapshot/2015-08-18/web/packages/REDCapR/vignettes/TroubleshootingApiCalls.html in R.
I can connect to the test site with Curl and PostMan. My problem is how to implement this in Delphi with SSL.
The Curl script from PostMan:
curl --location 'https://bbmc.ouhsc.edu/redcap/api/' \
--data-urlencode 'token=9A81268476645C4E5F03428B8AC3AA7B' \
--data-urlencode 'content=record' \
--data-urlencode 'action=export' \
--data-urlencode 'format=csv' \
--data-urlencode 'rawOrLabel=label'
After much searching, this is my Delphi code. What have I missed? IdLogFile1 is a component on the form.
function TForm1.IdSSLIOHandlerSocketOpenSSL1VerifyPeer(Certificate: TIdX509; AOk: Boolean; ADepth, AError: Integer): Boolean;
begin
showmessage('at IOhandler');
Result := true; // always returns true
end;
procedure TForm1.idHTTP2BtnClick(Sender: TObject);
var post : string;
Params : TStringList;
idHTTP : TIdHTTP;
SSL1 : TIdSSLIOHandlerSocketOpenSSL;
status : integer;
response : TstringStream;
begin
params := TStringList.Create;
idHTTP := TIdHTTP.Create(nil);
SSL1 := TIdSSLIOHandlerSocketOpenSSL.Create(idHTTP);
response := TstringStream.create;
SSL1.SSLOptions.Mode := sslmClient ;
SSL1.SSLOptions.SSLVersions := [sslvTLSv1, sslvTLSv1_1, sslvTLSv1_2 ];// [ sslvSSLv3, sslvSSLv23,sslvSSLv2, sslvTLSv1, sslvTLSv1_1, sslvTLSv1_2];
SSL1.SSLOptions.VerifyDepth := 0;
SSL1.OnVerifyPeer := IdSSLIOHandlerSocketOpenSSL1VerifyPeer;
SSL1.SSLOptions.VerifyMode := [ ];
idHTTP.IOHandler := SSL1;
memo1.Lines.clear;
idHTTP.ReadTimeout := 3000;
idHTTP.ConnectTimeout := 3000;
idHttp.Request.BasicAuthentication := false;
try
idHTTP.HandleRedirects := true;
idHTTP.Intercept := IdLogFile1;
IdLogFile1.Active := true;
IdHttp.Request.CustomHeaders.Clear;
IdHttp.Request.CustomHeaders.Values['token'] := '9A81268476645C4E5F03428B8AC3AA7B';
IdHttp.Request.CustomHeaders.Values['content'] := 'record';
IdHttp.Request.CustomHeaders.Values['action'] := 'export';
IdHttp.Request.CustomHeaders.Values['format'] := 'csv';
IdHttp.Request.CustomHeaders.Values['rawOrLabel'] := 'label';
IdHttp.Request.CustomHeaders.Values['verify_ssl'] := 'false';
IdHttp.Request.CustomHeaders.Values['ssl_verify'] := 'false'; //various verify options ?
IdHttp.Request.CustomHeaders.Values['ssl_verifypeer'] := 'false';
idHTTP.Request.ContentType := 'application/x-www-form-urlencoded';
IdHTTP.Request.Charset := 'utf-8';
idHTTP.HTTPOptions := [hoKeepOrigProtocol, hoForceEncodeParams];
idHTTP.Post('https://bbmc.ouhsc.edu/redcap/api/', params, response );
finally
memo1.Lines.add(' ');
memo1.lines.add(idHTTP.ResponseText);
memo1.Lines.add(' ');
status := idHTTP.ResponseCode;
memo1.Lines.Add('code: ' + inttostr(status));
idhttp.Disconnect;
end;
Params.Free;
SSL1.Free;
idHTTP.Free;
response.Free;
end;
| How do I implement SSL in Delphi to connect to a REDCap API server? | You are setting up the TLS connection correctly (provided the appropriate OpenSSL DLLs are available where Indy can find them).
What you are not setting up correctly is your data parameters. Curl's --data-urlencode command puts the data in the HTTP request body, not in the HTTP headers. So you need to put the data in the TStringList that you are posting (TIdHTTP will handle the url-encoding for you).
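As an aside (a Python illustration only, not part of the Delphi fix; the field names are taken from the question), this is the url-encoded string that ends up in the HTTP request body with Content-Type: application/x-www-form-urlencoded, which is roughly what TIdHTTP.Post builds from the TStringList:

```python
from urllib.parse import urlencode

params = {
    "token": "9A81268476645C4E5F03428B8AC3AA7B",
    "content": "record",
    "action": "export",
    "format": "csv",
    "rawOrLabel": "label",
}

# The wire format is name=value pairs joined by '&', with each name and
# value percent-encoded; this string is the entire request body.
body = urlencode(params)
print(body)
```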
Try this instead:
procedure TForm1.idHTTP2BtnClick(Sender: TObject);
var
params : TStringList;
idHTTP : TIdHTTP;
idSSL : TIdSSLIOHandlerSocketOpenSSL;
status : integer;
response : string;
begin
params := TStringList.Create;
try
idHTTP := TIdHTTP.Create(nil);
try
idSSL := TIdSSLIOHandlerSocketOpenSSL.Create(idHTTP);
idSSL.SSLOptions.Mode := sslmClient ;
idSSL.SSLOptions.SSLVersions := [sslvTLSv1, sslvTLSv1_1, sslvTLSv1_2 ];
idSSL.SSLOptions.VerifyDepth := 0;
idSSL.OnVerifyPeer := IdSSLIOHandlerSocketOpenSSL1VerifyPeer;
idSSL.SSLOptions.VerifyMode := [ ];
idHTTP.IOHandler := idSSL;
Memo1.Lines.Clear;
idHTTP.ReadTimeout := 3000;
idHTTP.ConnectTimeout := 3000;
idHTTP.Request.BasicAuthentication := false;
try
idHTTP.HandleRedirects := true;
idHTTP.Intercept := IdLogFile1;
IdLogFile1.Active := true;
params.Add('token=9A81268476645C4E5F03428B8AC3AA7B');
params.Add('content=record');
params.Add('action=export');
params.Add('format=csv');
params.Add('rawOrLabel=label');
idHTTP.Request.ContentType := 'application/x-www-form-urlencoded';
idHTTP.Request.Charset := 'utf-8';
idHTTP.HTTPOptions := [hoKeepOrigProtocol, hoForceEncodeParams];
response := idHTTP.Post('https://bbmc.ouhsc.edu/redcap/api/', params);
finally
Memo1.Lines.Add(' ');
Memo1.Lines.Add(idHTTP.ResponseText);
Memo1.Lines.Add(' ');
status := idHTTP.ResponseCode;
Memo1.Lines.Add('code: ' + IntToStr(status));
end;
finally
idHTTP.Free;
end;
finally
params.Free;
end;
end;
|
76381369 | 76384486 | In my program I have to import data from a remote SQL Server database.
I am using ASP.NET MVC 5 and EF6 Code First (I'm new to EF and MVC 5).
First, I copy data from a remote view to a table in the local database.
For this part I use this code in the action method (names are in Italian):
using (var source = new TimeWebDBContext())
{
using (var target = new GESTPrefContext())
{
// 1 - Truncate table AnagraficaTimeWeb is exists
target.Database.ExecuteSqlCommand("IF EXISTS(SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='AnagraficaTimeWeb') TRUNCATE TABLE AnagraficaTimeWeb");
// 2 - Copy from remote view to AnagraficaTimeWeb table
var dati_importati = from i in source.VW_PREFNO_ANAGRAFICORUOLO
select new AnagraficaTimeWeb()
{
Matricola = i.MATRICOLA,
Cognome = i.COGNOME,
Nome = i.NOME,
Sesso = i.SESSO,
Email = i.EMAIL,
IdRuolo = i.IDRUOLO,
Ruolo = i.RUOLO,
DataInizio = i.DATAINIZIO,
DataFine = i.DATAFINE,
DataFineRapporto = i.DATALICENZ,
DataUltimaImportazione = DateTime.Now
};
target.DatiAnagraficaTimeWeb.AddRange(dati_importati.ToList());
target.SaveChanges();
}
}
The view returns a list of employees with their role.
Roles have to be imported in a distinct local table called PROFILO, while employees data are saved in the IMPIEGATO table.
The remaining part of the import process consists of :
a) insert new data in the PROFILO table (data already saved are ignored)
b) update of employee data already present in the local IMPIEGATO table (name, email,etc. are overwritten)
c) insert new employees not yet in the IMPIEGATO table.
Since I'm new to EF6, I thought I'd use SQL code.
In my opinion the possible solutions are :
execute SQL code directly in the action method with db.Database.ExecuteSqlCommand
This is the code i write:
code for point a)
StringBuilder sql = new StringBuilder();
sql.AppendLine("INSERT INTO PROFILO (IdTimeWeb, Descrizione, Ordinamento, Stato, Datainserimento)");
sql.AppendLine(" SELECT DISTINCT IdRuolo, Ruolo, 1," + ((int)EnumStato.Abilitato) + ",'" + DateTime.Now.ToShortDateString()+"'");
sql.AppendLine(" FROM AnagraficaTimeWeb i");
sql.AppendLine(" WHERE NOT EXISTS");
sql.AppendLine("(SELECT 1 FROM PROFILO p WHERE p.Descrizione = i.Ruolo)");
target.Database.ExecuteSqlCommand(sql.ToString());
code for point b)
sql.Clear();
sql.Append("UPDATE i " + Environment.NewLine);
sql.Append(" SET i.Cognome = a.Cognome" + Environment.NewLine);
sql.Append(" , i.Nome = a.Nome" + Environment.NewLine);
sql.Append(" , i.Sesso = a.Sesso" + Environment.NewLine);
sql.Append(" ,i.Email = a.Email" + Environment.NewLine);
sql.Append(" ,i.DataModifica = '" + DateTime.Now.ToShortDateString() +"'"+ Environment.NewLine);
sql.Append(" FROM Impiegato i " + Environment.NewLine);
sql.Append(" JOIN AnagraficaTimeWeb a on i.Matricola=a.Matricola " + Environment.NewLine);
sql.Append(" WHERE i.Stato =" + ((int)EnumStato.Abilitato) + Environment.NewLine);
target.Database.ExecuteSqlCommand(sql.ToString());
code for point c)
sql.Clear();
sql.Append("INSERT INTO IMPIEGATO(Cognome, Nome, Matricola, Sesso, Email, Stato, DataInserimento) " + Environment.NewLine);
sql.Append("SELECT a.Cognome" + Environment.NewLine);
sql.Append(", a.Nome" + Environment.NewLine);
sql.Append(", a.Matricola" + Environment.NewLine);
sql.Append(", a.Sesso" + Environment.NewLine);
sql.Append(", a.Email" + Environment.NewLine);
sql.Append("," + ((int)EnumStato.Abilitato )+ Environment.NewLine);
sql.Append(",'"+ DateTime.Now.ToShortDateString() +"'" + Environment.NewLine);
sql.Append(" FROM AnagraficaTimeWeb a " + Environment.NewLine);
sql.Append(" LEFT OUTER JOIN IMPIEGATO on a.Matricola = Impiegato.Matricola " + Environment.NewLine);
sql.Append(" WHERE Impiegato.Matricola is null" + Environment.NewLine);
target.Database.ExecuteSqlCommand(sql.ToString());
create a stored procedure to call from the action method. In this case, how should the stored procedure be created?
2.a) in the Up method of a migration?
2.b) or by running the stored procedure creation script directly in the database (after the database was first created) and then calling the stored procedure from the action method?
| Stored procedure with EF6 Code first - best practice? | You can create a stored procedure directly in the database and implement all three remaining parts of your process (points a, b, c) in one SP (stored procedure). An SP is stored as a compiled object in SQL Server, so it is fast: the server can reuse a cached execution plan instead of recompiling the statement each time.
To write an SP you need to be familiar with SQL statements and their control structures. It can feel like learning a new language. You can start with the link below:
https://learn.microsoft.com/en-us/sql/relational-databases/stored-procedures/stored-procedures-database-engine?view=sql-server-ver16
To save that stored procedure as a migration script, you can create a blank migration using
add-migration 'SPName' -IgnoreChanges
Then add your sp
public partial class SPName: DbMigration
{
public override void Up()
{
Sql(@"Create Stored Procedure script");
}
public override void Down()
{
Sql(@"Drop Stored Procedure script")
}
}
Don't forget to update the migration script (or add a new one) every time you modify/alter the SP.
To execute your SP you can refer to the code snippet below. SqlParameter helps keep the query clean and protects against SQL injection.
List<SqlParameter> sqlParms = new List<SqlParameter>
{
new SqlParameter { ParameterName = "@Id", Value = employee.EmployeeID },
new SqlParameter { ParameterName = "@FirstName ", Value = employee.FirstName },
new SqlParameter { ParameterName = "@LastName", Value = employee.LastName}
};
db.Database.ExecuteSqlCommand("EXEC dbo.spName @Id, @FirstName, @LastName", sqlParms.ToArray());
|
76384694 | 76386084 | For example, I can add definitions for the C/C++ preprocessor with CMake:
add_definitions(-DFOO -DBAR ...)
and then I can use them for conditional compilation
#ifdef FOO
code ...
#endif
#ifdef BAR
code ...
#endif
Is there a way to do the same thing with Zig and its build system using compilation arguments or something like that?
| How to do conditional compilation with Zig? | You can do something similar using the build system. This requires some boilerplate code to do the option handling. Following the tutorial on https://zig.news/xq/zig-build-explained-part-1-59lf for the build system and https://ziggit.dev/t/custom-build-options/138/8 for the option handling:
Create a separate file called build.zig that contains a function build():
const std = @import("std");
pub fn build(b: *std.build.Builder) !void {
const build_options = b.addOptions();
// add command line flag
// and set default value
build_options.addOption(bool, "sideways", b.option(bool, "sideways", "print sideways") orelse false);
// set executable name and source code
const exe = b.addExecutable("hello", "hello.zig");
exe.addOptions("build_options", build_options);
// compile and copy to zig-out/bin
exe.install();
}
Use the option for conditional compilation in a separate file hello.zig using @import("build_options"):
const std = @import("std");
pub fn main() !void {
const print_sideways = @import("build_options").sideways;
const stdout = std.io.getStdOut().writer();
if (print_sideways) {
try stdout.print("Sideways Hello, {s}!\n", .{"world"});
} else {
try stdout.print("Regular Hello, {s}!\n", .{"world"});
}
}
Compile with:
zig build -Dsideways=true
Executing zig-out/bin/hello gives the following output:
Sideways Hello, world!
|
76382999 | 76384407 | How can I configure my ASP.NET Core 6 Web API controllers to use AWS Cognito authorization?
This is the code I wrote in my program.cs file:
var AWSconfiguration = builder.Configuration.GetSection("AWS:Cognito");
var userPoolId = AWSconfiguration["UserPoolId"];
var clientId = AWSconfiguration["ClientId"];
var region = AWSconfiguration["Region"];
builder.Services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
options.Authority = $"https://cognito-idp.{region}.amazonaws.com/{userPoolId}";
options.TokenValidationParameters = new TokenValidationParameters
{
ValidateIssuerSigningKey = true,
ValidateIssuer = true,
ValidateAudience = true,
ValidIssuer = $"https://cognito-idp.{region}.amazonaws.com/{userPoolId}",
ValidAudience = clientId,
};
});
I'm getting this error:
www-authenticate: Bearer error="invalid_token",
error_description="The audience 'empty' is invalid"
I validated my clientID in the AWS console.
Thanks for the help
| Cognito JWT Authorize in ASP.NET Core 6 Web API | Cognito access tokens don't have an audience claim - though ideally they should. With other authorization servers, APIs check that the received access token has the expected logical audience, such as api.mycompany.com.
For Cognito you will need to configure .NET to not validate the audience, similar to this. The other token validation parameters are derived from the metadata endpoint located via the issuer base URL:
private void ConfigureOAuth(IServiceCollection services)
{
services
.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(options =>
{
options.Authority = this.configuration.IssuerBaseUrl;
options.TokenValidationParameters = new TokenValidationParameters
{
ValidateAudience = false,
};
});
services.AddAuthorization(options =>
{
options.FallbackPolicy = new AuthorizationPolicyBuilder().RequireAuthenticatedUser().Build();
});
}
The FallbackPolicy then ensures that authentication is applied globally, except for endpoints annotated with [AllowAnonymous].
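To confirm the diagnosis yourself, you can decode the token's payload (a JWT's middle segment is just base64url-encoded JSON) and check whether an aud claim is present; Cognito access tokens carry client_id instead. A minimal sketch that needs no JWT library (the payload below is made up):

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT payload without verifying the signature (debugging only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token shaped like a Cognito access token payload:
fake_payload = {"sub": "123", "client_id": "abc", "token_use": "access"}
segment = base64.urlsafe_b64encode(
    json.dumps(fake_payload).encode()).decode().rstrip("=")
token = "header." + segment + ".signature"

claims = jwt_claims(token)
print("aud" in claims)      # -> False: there is no audience claim to validate
print(claims["client_id"])  # the claim Cognito provides instead
```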
|
76381827 | 76384517 | I'm totally new to WordPress, PHP and cron. So, I have a task: I need to take data from a form and pass it into a cron function.
This is my code. I created a custom plugin page in the WordPress admin panel and tried to run this code.
Actually, the code works if I enter an article ID instead of the variable $post_id.
function cron_add_one_minute( $schedules ) {
$schedules['one_minute'] = array(
'interval' => 60,
'display' => 'One in minute'
);
return $schedules;
};
if(!empty($_POST) && ($_POST['btnaup']) && !wp_next_scheduled( 'update_post' )) {
wp_schedule_event( time(), 'one_minute', 'update_post');
}
if(isset( $_POST['btnaup'])) {
$post_id = $_POST['id'];
$b = $_POST['days'];
}
add_action( 'update_post', 'update_my_post', 10, 1);
function update_my_post( $post_id ){
$time = current_time('mysql');
wp_update_post( array (
'ID' => $post_id,
'post_date' => $time,
'post_date_gmt' => get_gmt_from_date( $time ),
'post_modified' => $time,
'post_modified_gmt' => get_gmt_from_date($time),
) );
}
| How to pass arguments in cron job function | Per the docs for wp_schedule_event, the fourth parameter, which you aren't currently using, is $args:
Array containing arguments to pass to the hook's callback function. Each value in the array is passed to the callback as an individual parameter.
The array keys are ignored.
Default: array()
This means you should be able to use:
if (!empty($_POST) && ($_POST['btnaup']) && !wp_next_scheduled('update_post')) {
$data = [
$_POST['id'],
];
wp_schedule_event(time(), 'one_minute', 'update_post', $data);
}
Your update_my_post function should just work then.
|
76381875 | 76384775 | I am executing a script which starts an executable proc1.exe. When proc1.exe is running, the batch file has to start another executable proc2.exe.
Example: File A.bat runs proc1.exe with following requirements.
When proc1.exe is running, proc2.exe should run.
When proc1.exe is closed, proc2.exe should be terminated.
I tried this code:
tasklist /fi "ImageName eq proc1.exe" /fo csv 2>NUL|find /I "proc1.exe">NUL
if "%ERRORLEVEL%"=="0"
echo proc1.exe is running
start /wait proc2.exe
else
taskkill /F /IM proc2.exe
When I run the script, the command window displays the error message:
The syntax of tasklist command is incorrect.
What is the issue with the above tasklist command line?
I am not sure whether the else branch would ever be reached. How do I get back to the else branch in the script and kill proc2.exe after proc1.exe has terminated?
How could this be achieved?
Here are more information after post of first version of Mofi´s answer.
This is what I have tried and I think, I am near to the solution but need your support.
Batch file A.bat is modified to:
start bin\proc1.exe 2>&1
rem proc1.exe is running. Proc1.exe is a GUI application with buttons and
rem widgets. The cmd window and the GUI application window are opened now.
start /wait proc2.exe 2>&1
rem proc2.exe is running now too. It is a Windows console application.
rem A second console window is opened for that reason.
rem Both proc1.exe and proc2.exe are running as processes.
rem Now I want to check whether Proc1.exe is running or not.
rem If it is running do nothing, else terminate proc2.exe.
rem I have written a loop for that purpose. The issue is that
rem the control is not coming to the loop.
rem When I close the GUI application, the ELSE part should
rem be executed and proc2.exe should be terminated/killed.
rem How do I bring the control to loop label or how
rem to signal proc1.exe is closed and run taskkill?
:loop
tasklist /FI "IMAGENAME eq proc1.exe" 2>NUL | find /I /N "proc1.exe">NUL
if "%ERRORLEVEL%"=="0" (
timeout /T 1 /NOBREAK >NUL
goto loop
) else (
taskkill /F /IM proc2.exe >NUL
)
| How to start a GUI executable and a console program and terminate the console application on GUI application closed by the user? | The task is very unclear.
Is proc1.exe started outside of the batch file or also by the batch file?
Is proc1.exe a Windows console or a Windows GUI application and does it open files for read/write operations or makes it registry reads/writes or does it open connections to other processes or even other devices?
Is proc2.exe a Windows console or a Windows GUI application and does it open files for read/write operations or makes it registry reads/writes or does it open connections to other processes or even other devices?
Is it possible to start proc2.exe a second before proc1.exe or does proc2.exe depend on an already running proc1.exe for a successful start?
Can be more instances of proc1.exe and/or proc2.exe already running before the batch file starts just proc2.exe or both applications?
Is it possible that even more instances of proc1.exe or proc2.exe are started by a user or any other process while the batch file observes the one or two processes started during execution of the batch file.
Is it really okay forcing a brutal kill of all running instances of proc2.exe by the operating system by using TASKKILL with the options /F /IM proc2.exe giving none of the running proc2 processes the chance to gracefully terminate with closing connections, finishing all read/write operations, saving unsaved data and closing files?
Let me assume the answers to these seven questions are as follows:
The batch file always starts proc1.exe.
It is unknown what proc1.exe is and what it does.
It is unknown what proc2.exe is and what it does.
Yes, proc2.exe starts also successful on proc1.exe not already running.
Yes, that is indeed possible.
Yes, that is possible, too.
No, that is not okay. proc2.exe should close itself and should not be killed by the OS.
In this case, the following commented batch file can be used:
@echo off
setlocal EnableExtensions DisableDelayedExpansion
rem Define the two programs to run which can be even twice the same program.
set "ProgStart_FullName=%SystemRoot%\Notepad.exe"
set "ProgWait_FullName=%SystemRoot%\Notepad.exe"
rem Get just the file name of the first program with file extension.
for %%I in ("%ProgStart_FullName%") do set "ProgStart_FileName=%%~nxI"
rem Delete all environment variables of which name starts with
rem #PID_ in the local environment of this batch file context.
for /F "delims==" %%I in ('set #PID_ 2^>nul') do set "%%I="
rem Get all process identifiers of all already running processes
rem of the executable which is started next by the the batch file.
for /F "tokens=2" %%I in ('%SystemRoot%\System32\tasklist.exe /FI "IMAGENAME eq %ProgStart_FileName%" /NH') do set "#PID_%%I=1"
rem Start the program which should run as separate process.
start "" "%ProgStart_FullName%"
rem Find out the process identifier of just started program. There are
rem hopefully not started two instances of this program at the same time.
for /F "tokens=2" %%I in ('%SystemRoot%\System32\tasklist.exe /FI "IMAGENAME eq %ProgStart_FileName%" /NH') do if not defined #PID_%%I set "#PID_OBSERVE=%%I" & goto StartNext
echo ERROR: Failed to start "%ProgStart_FullName%"!& echo(& pause & goto EndBatch
:StartNext
rem Start the next program and wait for its self-termination.
"%ProgWait_FullName%"
rem Check if the first started program is still running and send it in
rem this case the WM_CLOSE message for a graceful self-termination giving
rem the process the chance to close all connections, finish all file and
rem registry accesses with saving all unsaved data and close all files.
%SystemRoot%\System32\tasklist.exe /FI "PID eq %#PID_OBSERVE%" | %SystemRoot%\System32\find.exe /I "%ProgStart_FileName%" >nul || goto EndBatch
%SystemRoot%\System32\taskkill.exe /PID %#PID_OBSERVE% >nul 2>nul
rem Wait five seconds for the self-termination of the first started program.
rem A Windows GUI program should get even more time, especially if a user uses
rem that GUI program and the program asks the user if unsaved data should be
rem saved before exiting. How much time to wait depends on the application.
echo Wait for exit of "%ProgStart_FileName%" with PID %#PID_OBSERVE% ...
set "LoopCount=5"
:WaitLoop
%SystemRoot%\System32\timeout.exe /T 1 /NOBREAK >nul
%SystemRoot%\System32\tasklist.exe /FI "PID eq %#PID_OBSERVE%" | %SystemRoot%\System32\find.exe /I "%ProgStart_FileName%" >nul || goto EndBatch
set /A LoopCount-=1
if not %LoopCount% == 0 goto WaitLoop
rem Force a brutal kill of the first started program by the operating system.
%SystemRoot%\System32\taskkill.exe /F /PID %#PID_OBSERVE% >nul 2>nul
:EndBatch
endlocal
This batch file demonstrates the process management by starting two instances of Windows Notepad. A user can start other Notepad instances before running the batch file and can also start even more Notepad processes while the two instances of Notepad started by the Windows Command Processor during processing of the batch file are still running and wait for user actions. The user can close the first batch started instance and later the second batch started instance of Notepad, but the opposite is also possible. If the user entered text into new file of first batch started instance without saving that text and closes first the second batch started instance, the first batch started instance of Notepad prompts the user if the unsaved text should be saved now. The user has five seconds time for the choice as otherwise the batch file runs TASKKILL with option /F to force a kill of the first batch started Notepad resulting in a loss of the input text.
The batch file as is cannot be used for executables with a space in file name.
The batch file cannot be used as posted here if ProgStart_FullName is %ComSpec% or %SystemRoot%\System32\cmd.exe.
The environment variable ProgStart_FullName must be defined with the fully qualified file name of proc2.exe while the environment variable ProgWait_FullName must be defined with the fully qualified file name of proc1.exe. So, proc2.exe is started first as separate, parallel running process and next is started proc1.exe on which cmd.exe halts the batch file execution until proc1.exe exits itself. Then the batch file terminates also proc2.exe on still running or finally kills it if proc2.exe does not close itself within five seconds for whatever reason.
The task became more clear with the additional information added to the question.
The seven questions are answered as follows:
The batch file always starts proc1.exe.
The application proc1.exe is a Windows GUI application.
The application proc2.exe is a Windows console application.
proc2.exe must be started after proc1.exe.
There is neither proc1.exe nor proc2.exe started before batch file execution.
There are running never more than one proc1.exe one proc2.exe.
proc2.exe should close itself and should not be killed by the OS.
The commented batch file for this task with proc1.exe and proc2.exe in subdirectory bin of the batch file directory could be:
@echo off
setlocal EnableExtensions DisableDelayedExpansion
set "FullNameProcess1=%~dp0bin\proc1.exe"
set "FullNameProcess2=%~dp0bin\proc2.exe"
rem Get just the file name of the two programs with file extension.
for %%I in ("%FullNameProcess1%") do set "FileNameProcess1=%%~nxI"
for %%I in ("%FullNameProcess2%") do set "FileNameProcess2=%%~nxI"
rem Start the GUI program which should run as separate process in foreground.
start "" "%FullNameProcess1%" 2>nul
rem Could the first program not be started at all?
if errorlevel 9059 echo ERROR: Failed to start "%FullNameProcess1%"!& echo(& pause & goto EndBatch
rem Start the console program which should run as separate process
rem without opening a console window.
start "" /B "%FullNameProcess2%"
rem Define an endless running loop searching once per second in the
rem task list if the first started GUI application is still running.
:WaitLoop
%SystemRoot%\System32\timeout.exe /T 1 /NOBREAK >nul
%SystemRoot%\System32\tasklist.exe /FI "IMAGENAME eq %FileNameProcess1%" /NH | %SystemRoot%\System32\find.exe "%FileNameProcess1%" >nul && goto WaitLoop
rem The first started GUI program is not running anymore. Check now if the
rem console program is still running and if that is the case, send it the
rem message to close itself.
%SystemRoot%\System32\tasklist.exe /FI "IMAGENAME eq %FileNameProcess2%" /NH | %SystemRoot%\System32\find.exe "%FileNameProcess2%" >nul || goto EndBatch
%SystemRoot%\System32\taskkill.exe /IM "%FileNameProcess2%" >nul 2>nul
rem Wait one second and check if the console program really terminated itself.
rem Otherwise force a brutal kill of the console program by the operating system.
%SystemRoot%\System32\timeout.exe /T 1 /NOBREAK >nul
%SystemRoot%\System32\tasklist.exe /FI "IMAGENAME eq %FileNameProcess2%" /NH | %SystemRoot%\System32\find.exe "%FileNameProcess2%" >nul && %SystemRoot%\System32\taskkill.exe /F /IM "%FileNameProcess2%" >nul 2>nul
:EndBatch
endlocal
The batch file as is cannot be used for executables with a space in file name.
The batch file cannot be used if proc2.exe is in reality cmd.exe, because taskkill.exe /IM "cmd.exe" results in termination of all running cmd processes, including the one processing the batch file.
To understand the commands used and how they work, open a command prompt window, execute there the following commands, and read the displayed help pages for each command, entirely and carefully.
echo /?
endlocal /?
find /?
for /?
goto /?
if /?
pause /?
rem /?
set /?
setlocal /?
start /?
taskkill /?
tasklist /?
timeout /?
Read the Microsoft documentation about Using command redirection operators for an explanation of >nul and 2>nul and |. The redirection operators > and | must be escaped with caret character ^ on the FOR command lines to be interpreted as literal characters when Windows command interpreter processes these command lines before executing command FOR which executes the embedded command line with using a separate command process started in background with %ComSpec% /c and the command line within ' with ^ appended as additional arguments.
See also single line with multiple commands using Windows batch file for an explanation of unconditional command operator & and conditional command operator || and correct syntax for an IF condition with an ELSE branch. The usage of "%ERRORLEVEL%"=="0" is neither described by the usage help of command IF nor is it ever a good idea to use this string comparison for exit code evaluation in a batch file.
|
76383269 | 76384410 | I'm getting the following error when trying to compile my rust diesel project despite the diesel docs showing that this trait is implemented
the trait bound DateTime<Utc>: FromSql<diesel::sql_types::Timestamptz, Pg> is not satisfied
My schema.rs
pub mod offers {
diesel::table! {
offers.offers (id) {
id -> Int4,
#[max_length = 255]
offername -> Varchar,
#[max_length = 255]
offertypeid -> Nullable<Varchar>,
startdate -> Nullable<Timestamptz>,
enddate -> Nullable<Timestamptz>,
frequency -> Nullable<Int4>,
#[max_length = 255]
createdby -> Nullable<Varchar>,
createdAt -> Nullable<Timestamptz>
}
}
}
Here is my models.rs:
use diesel::prelude::*;
use chrono::{DateTime, Utc};
use crate::schema::offers::offers as offerTable;
#[derive(Queryable, Selectable)]
#[diesel(table_name = offerTable)]
#[diesel(check_for_backend(diesel::pg::Pg))]
pub struct Offer {
pub id: i32,
pub enddate: Option<DateTime<Utc>>,
pub createdAt: Option<DateTime<Utc>>,
pub createdby: Option<String>,
pub frequency: Option<i32>,
pub offername: String,
pub startdate: Option<DateTime<Utc>> <--- ERROR FOR THIS LINE
}
In the docs for diesel here is shows that this mapping should work and I haven't been able to figure out why it's not working
| the trait `FromSql` is not implemented for `DateTime` | You need to enable the "chrono" feature for the implementation for DateTime<Utc> from the chrono crate to be provided. This is shown as an annotation in the docs and is not enabled by default. You can read more about this feature and others in Diesel's crate feature flags section of the docs.
So your Cargo.toml should contain at least this:
[dependencies]
diesel = { version = "2.1.0", features = ["postgres", "chrono"] }
This goes for Diesel version 1.x as well.
|
76381239 | 76385356 | For min(ctz(x), ctz(y)), we can use ctz(x | y) to gain better performance. But what about max(ctz(x), ctz(y))?
ctz represents "count trailing zeros".
C++ version (Compiler Explorer)
#include <algorithm>
#include <bit>
#include <cstdint>
int32_t test2(uint64_t x, uint64_t y) {
return std::max(std::countr_zero(x), std::countr_zero(y));
}
Rust version (Compiler Explorer)
pub fn test2(x: u64, y: u64) -> u32 {
x.trailing_zeros().max(y.trailing_zeros())
}
| Is there a faster algorithm for max(ctz(x), ctz(y))? | These are equivalent:
max(ctz(a),ctz(b))
ctz((a|-a)&(b|-b))
ctz(a)+ctz(b)-ctz(a|b)
The math-identity ctz(a)+ctz(b)-ctz(a|b) requires 6 CPU instructions, parallelizable to 3 steps on a 3-way superscalar CPU:
3× ctz
1× bitwise-or
1× addition
1× subtraction
The bit-mashing ctz((a|-a)&(b|-b)) requires 6 CPU instructions, parallelizable to 4 steps on a 2-way superscalar CPU:
2× negation
2× bitwise-or
1× bitwise-and
1× ctz
The naïve max(ctz(a),ctz(b)) requires 5 CPU instructions, parallelizable to 4 steps on a 2-way superscalar CPU:
2× ctz
1× comparison
1× conditional branch
1× load/move (so that the "output" is always in the same register)
... but note that branch instructions can be very expensive.
If your CPU has a conditional load/move instruction, this reduces to 4 CPU instructions taking 3 super-scalar steps.
If your CPU has a max instruction (e.g. SSE4), this reduces to 3 CPU instructions taking 2 super-scalar steps.
All that said, the opportunities for super-scalar operation depend on which instructions you're trying to put against each other. Typically you get the most by putting different instructions in parallel, since they use different parts of the CPU (all at once). Typically there will be more "add" and "bitwise or" units than "ctz" units, so doing multiple ctz instructions may actually be the limiting factor, especially for the "math-identity" version.
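The three equivalences above are easy to sanity-check. A small Python sketch (using `(x & -x).bit_length() - 1` as a stand-in for `ctz` on nonzero integers, since Python has no fixed-width trailing-zero intrinsic):

```python
def ctz(x: int) -> int:
    """Trailing-zero count of a nonzero integer."""
    return (x & -x).bit_length() - 1

def max_ctz_naive(a: int, b: int) -> int:
    return max(ctz(a), ctz(b))

def max_ctz_mask(a: int, b: int) -> int:
    # a | -a sets every bit from ctz(a) upward, so ANDing the two
    # masks leaves the lowest set bit at max(ctz(a), ctz(b)).
    return ctz((a | -a) & (b | -b))

def max_ctz_sum(a: int, b: int) -> int:
    # ctz(a | b) == min(ctz(a), ctz(b)), so this follows from
    # max(x, y) == x + y - min(x, y).
    return ctz(a) + ctz(b) - ctz(a | b)

# Exhaustive check over a small range of nonzero inputs
# (ctz(0) is undefined, as it is for the hardware instructions).
for a in range(1, 512):
    for b in range(1, 512):
        assert max_ctz_naive(a, b) == max_ctz_mask(a, b) == max_ctz_sum(a, b)
```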
If "compare and branch" is too expensive, you can make a non-branching "max" in 4 CPU instructions. Assuming A and B are positive integers:
C = A-B
subtract the previous carry, plus D, from D itself (D is now either 0 or -1, regardless of whatever value it previously held)
C &= D (C is now min(0, A-B))
A -= C (A' is now max(A,B))
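The same shape can be sketched in portable code; note this is only an illustration: it builds the all-ones/zero mask D from a comparison rather than from the carry flag, and masks to 64 bits to emulate register wrap-around.

```python
MASK64 = (1 << 64) - 1  # emulate 64-bit register wrap-around

def branchless_max(a: int, b: int) -> int:
    """max(a, b) for 64-bit unsigned values, with no conditional branch."""
    c = (a - b) & MASK64   # C = A - B (wraps when a < b)
    d = -(a < b) & MASK64  # D = all-ones iff a < b (comparison stands in for the carry trick)
    c &= d                 # C = min(0, A - B) in two's complement
    return (a - c) & MASK64  # A - (A - B) = B when a < b, else A
```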
|
76384813 | 76386149 | These two formulas are the same, except the first one is not an array formula and the second one is. How can the first formula be converted to an array formula? Getting circular logic when using the array formula.
Standard Formula (works fine, no circular logic):
=LET(a, A2, b, B2, c, C2, d, D1, e, E1,
dd, IF(a = 1, 0, d),
ee, IF(a = 1, 0, e),
(b * c + dd * ee ) / (b + dd) )
Array Formula (circular logic error):
=LET(a, A9:A12, b, B9:B12, c, C9:C12, d, D8:D11, e,E8:E11,
dd, IF(a = 1, 0, d),
ee, IF(a = 1, 0, e),
(b * c + dd * ee ) / (b + dd) )
The formula is a fairly simple weighted average of two sets of numbers. Trying to convert it to an array formula. The previous result is used on the following row, except on the first row where there is no previous result.
The difficulty is in how to reference the previous result cell when calculating the current cell.
Data:
| Seq | B   | C    | D   |
|-----|-----|------|-----|
| 1   | 100 | 1.00 | -   |
| 2   | 100 | 3.00 | 800 |
| 3   | 250 | 2.00 | 200 |
| 4   | 400 | 5.00 | 300 |
| Using Office365 Excel array formulas, how to convert this standard formula? | INDEX each array and use SCAN to return the values:
=LET(
a, A9:A12,
b, B9:B12,
c, C9:C12,
d, D8:D11,
dd, IF(a = 1, 0, d),
SCAN(0,a,LAMBDA(z,y,(INDEX(b,y)*INDEX(c,y)+INDEX(dd,y)*z)/(INDEX(b,y)+INDEX(dd,y)))))
|
76383596 | 76384510 | Is there a way/function to calculate the proportion of each raster cell covered by a polygon? The polygons are usually larger than single cells and the landscape I'm working on is pretty big. I'll like to do it without converting the raster into cell-polygons and st_union/st_join, but I'm not sure if it's possible.
The output I'm looking for is a raster with cell values showing the proportion of each cell covered by the polygons layer.
| Area of each cell covered by polygons | Thanks for the comments.
In the end, the terra::rasterize() function with the cover = T parameter applied to the polygons layer does exactly what I was looking for... and it's super fast.
I was able to keep it all on the "raster side" and avoid the more intense processing of vectorizing the raster template and doing intersects/spatial joins.
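For intuition about what a per-cell coverage fraction means, here is a from-scratch Python sketch for the simplest case of an axis-aligned rectangular polygon (terra handles arbitrary polygon geometry; `coverage_fraction` and its parameters are hypothetical names used only for illustration):

```python
def coverage_fraction(poly_bounds, xmin, ymin, cell, ncols, nrows):
    """Fraction of each grid cell covered by an axis-aligned rectangle.

    poly_bounds = (pxmin, pymin, pxmax, pymax). This is the quantity
    terra::rasterize(..., cover = TRUE) computes per cell, restricted
    here to rectangles so the arithmetic stays visible.
    """
    pxmin, pymin, pxmax, pymax = poly_bounds
    grid = []
    for r in range(nrows):
        row = []
        for c in range(ncols):
            cx0 = xmin + c * cell
            cy0 = ymin + r * cell
            # Overlap of this cell box with the rectangle, clamped at 0
            w = max(0.0, min(cx0 + cell, pxmax) - max(cx0, pxmin))
            h = max(0.0, min(cy0 + cell, pymax) - max(cy0, pymin))
            row.append(w * h / (cell * cell))
        grid.append(row)
    return grid

# A 1.5 x 1 rectangle over a 2 x 1 grid of unit cells:
# the first cell is fully covered, the second half covered.
print(coverage_fraction((0, 0, 1.5, 1), 0, 0, 1.0, 2, 1))  # [[1.0, 0.5]]
```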
|
76381200 | 76385707 | I want the starting point 0% and the ending point 100% of the progress bar to be in the lower left corner. And the progress of the value changes to display normally. How can I accomplish it?
I want the progress bar to increase and decrease in value along the circle.
The result of my test is not correct, I don't know where is the problem and how to fix it.
Xaml:
<Window.Resources>
<local:AngleToPointConverter x:Key="prConverter"/>
<local:AngleToIsLargeConverter x:Key="isLargeConverter"/>
<Style x:Key="circularProgressBar" TargetType="local:CircularProgressBar">
<Setter Property="Value" Value="10"/>
<Setter Property="Maximum" Value="100"/>
<Setter Property="StrokeThickness" Value="10"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="local:CircularProgressBar">
<Canvas Width="100" Height="130">
<Ellipse Width="101" Height="101" Stroke="LightGray" Opacity="0.7" StrokeThickness="4" />
<Path Stroke="{TemplateBinding Background}"
StrokeThickness="{TemplateBinding StrokeThickness}">
<Path.Data>
<PathGeometry>
<PathFigure x:Name="fig" StartPoint="20,90">
<ArcSegment RotationAngle="0" SweepDirection="Clockwise"
Size="50,50"
Point="{Binding Path=Angle, Converter={StaticResource prConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}"
IsLargeArc="{Binding Path=Angle, Converter={StaticResource isLargeConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}"
>
</ArcSegment>
</PathFigure>
</PathGeometry>
</Path.Data>
</Path>
<Border Width="100" Height="100">
<Grid>
<Ellipse Width="50" Height="50" Fill="White" />
<Image Width="20" Height="20" Margin="30,25,30,30" Source="bulb1.PNG"/>
<TextBlock Width="50" Foreground="Black" Height="20" Margin="10,40,10,5" TextAlignment="Center"
Text="{Binding Path=Value, StringFormat={}{0}%,
RelativeSource={RelativeSource TemplatedParent}}"
FontSize="{TemplateBinding FontSize}"/>
</Grid>
</Border>
<Canvas Canvas.Top="110">
<Button x:Name="decrease" Margin="20,0,0,0" Command="{Binding DecreaseCommand}" >
<Button.Style>
<Style TargetType="{x:Type Button}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate>
<Grid>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" />
<Border Width="20" Height="20" >
<TextBlock Foreground="LightGray" Text="-" FontWeight="Bold" HorizontalAlignment="Center" VerticalAlignment="Center" />
</Border>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Button.Style>
</Button>
<Button x:Name="increase" Margin="60,0,0,0" Grid.Column="1" Command="{Binding IncreaseCommand}" >
<Button.Style>
<Style TargetType="{x:Type Button}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate>
<Grid>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" />
<Border Width="20" Height="20" Grid.Column="1">
<TextBlock Foreground="LightGray" Text="+" FontWeight="Bold" VerticalAlignment="Center" HorizontalAlignment="Center" />
</Border>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Button.Style>
</Button>
<!--<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" Margin="20,0,0,0"/>
<Border Width="20" Height="20" Margin="20,0,0,0">
<TextBlock Foreground="LightGray" Text="-" HorizontalAlignment="Center" />
</Border>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" Margin="60,0,0,0"/>
<Border Width="20" Height="20" Margin="60,0,0,0">
<TextBlock Foreground="LightGray" Text="+" HorizontalAlignment="Center" />
</Border>-->
</Canvas>
</Canvas>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Window.Resources>
<Grid Background="DarkBlue">
<local:CircularProgressBar Background="White" Style="{StaticResource circularProgressBar }"
Value="{Binding ElementName=CirularSlider, Path= Value}" Foreground="Black" FontWeight="Bold"
StrokeThickness="4"
BorderBrush="LightGray"/>
<Slider Minimum="0" Maximum="100"
x:Name="CirularSlider" IsSnapToTickEnabled="True"
VerticalAlignment="Top" Value="10"/>
</Grid>
Codebedhind:
public class CircularProgressBar : ProgressBar
{
public CircularProgressBar()
{
this.ValueChanged += CircularProgressBar_ValueChanged;
}
void CircularProgressBar_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
CircularProgressBar bar = sender as CircularProgressBar;
double currentAngle = bar.Angle;
double targetAngle = e.NewValue / bar.Maximum * 359.999;
// double targetAngle = e.NewValue / bar.Maximum * 179.999;
DoubleAnimation anim = new DoubleAnimation(currentAngle, targetAngle, TimeSpan.FromMilliseconds(500));
bar.BeginAnimation(CircularProgressBar.AngleProperty, anim, HandoffBehavior.SnapshotAndReplace);
}
public double Angle
{
get { return (double)GetValue(AngleProperty); }
set { SetValue(AngleProperty, value); }
}
// Using a DependencyProperty as the backing store for Angle. This enables animation, styling, binding, etc...
public static readonly DependencyProperty AngleProperty =
DependencyProperty.Register("Angle", typeof(double), typeof(CircularProgressBar), new PropertyMetadata(0.0));
public double StrokeThickness
{
get { return (double)GetValue(StrokeThicknessProperty); }
set { SetValue(StrokeThicknessProperty, value); }
}
// Using a DependencyProperty as the backing store for StrokeThickness. This enables animation, styling, binding, etc...
public static readonly DependencyProperty StrokeThicknessProperty =
DependencyProperty.Register("StrokeThickness", typeof(double), typeof(CircularProgressBar), new PropertyMetadata(10.0));
}
public class AngleToPointConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
double angle = (double)value;
double radius = 50;
double piang = angle * Math.PI / 180;
//double piang = angle * Math.PI / 310;
double px = Math.Sin(piang) * radius + radius;
double py = -Math.Cos(piang) * radius + radius;
return new System.Windows.Point(px, py);
}
public object ConvertBack(object value, Type targetTypes, object parameter, System.Globalization.CultureInfo culture)
{
throw new NotImplementedException();
}
}
public class AngleToIsLargeConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
double angle = (double)value;
return angle > 180;
// return angle > 300;
}
public object ConvertBack(object value, Type targetTypes, object parameter, System.Globalization.CultureInfo culture)
{
throw new NotImplementedException();
}
}
The result:
Edit:
Update:
How can I change the progress value by dragging the ball?
<Style x:Key="circularProgressBar1" TargetType="local:CircularProgressBar">
<Setter Property="Value" Value="10"/>
<Setter Property="Maximum" Value="100"/>
<Setter Property="StrokeThickness" Value="7"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="local:CircularProgressBar">
<Canvas Width="100" Height="130">
<Ellipse Width="105" Height="104" Margin="-2.4,-1.5,0,0" Stroke="LightGray" Opacity="0.7" StrokeThickness="8" />
<Path Stroke="{TemplateBinding Background}" StrokeStartLineCap="Round" StrokeEndLineCap="Round"
StrokeThickness="{TemplateBinding StrokeThickness}">
<Path.Data>
<PathGeometry>
<PathFigure x:Name="fig" StartPoint="20,90">
<ArcSegment RotationAngle="0" SweepDirection="Clockwise"
Size="50,50"
Point="{Binding Path=Angle, Converter={StaticResource prConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}"
IsLargeArc="{Binding Path=Angle, Converter={StaticResource isLargeConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}"
>
</ArcSegment>
</PathFigure>
</PathGeometry>
</Path.Data>
</Path>
<Button>
<Button.Style>
<Style TargetType="Button">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate>
<Path Stroke="Black" StrokeThickness="10" StrokeStartLineCap="Round" StrokeEndLineCap="Round">
<Path.Data>
<PathGeometry>
<PathGeometry.Figures>
<PathFigure StartPoint="{Binding Path=Angle, Converter={StaticResource prConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}">
<PathFigure.Segments>
<LineSegment Point="{Binding Path=Angle, Converter={StaticResource prConverter}, RelativeSource={RelativeSource FindAncestor, AncestorType=ProgressBar}}" />
</PathFigure.Segments>
</PathFigure>
</PathGeometry.Figures>
</PathGeometry>
</Path.Data>
</Path>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Button.Style>
</Button>
<Border Width="100" Height="100">
<Grid>
<Ellipse Width="50" Height="50" Fill="White" />
<Image Width="20" Height="20" Margin="30,25,30,30" Source="bulb1.PNG"/>
<TextBlock Width="50" Foreground="Black" Height="20" Margin="10,40,10,5" TextAlignment="Center"
Text="{Binding Path=Value, StringFormat={}{0}%,
RelativeSource={RelativeSource TemplatedParent}}"
FontSize="{TemplateBinding FontSize}"/>
</Grid>
</Border>
<Canvas Canvas.Top="110">
<Button x:Name="decrease" Margin="20,0,0,0" Command="{Binding DecreaseCommand}" >
<Button.Style>
<Style TargetType="{x:Type Button}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate>
<Grid>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" />
<Border Width="20" Height="20" >
<TextBlock Foreground="LightGray" Text="-" FontWeight="Bold" HorizontalAlignment="Center" VerticalAlignment="Center" />
</Border>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Button.Style>
</Button>
<Button x:Name="increase" Margin="60,0,0,0" Grid.Column="1" Command="{Binding IncreaseCommand}" >
<Button.Style>
<Style TargetType="{x:Type Button}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate>
<Grid>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" />
<Border Width="20" Height="20" Grid.Column="1">
<TextBlock Foreground="LightGray" Text="+" FontWeight="Bold" VerticalAlignment="Center" HorizontalAlignment="Center" />
</Border>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</Button.Style>
</Button>
<!--<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" Margin="20,0,0,0"/>
<Border Width="20" Height="20" Margin="20,0,0,0">
<TextBlock Foreground="LightGray" Text="-" HorizontalAlignment="Center" />
</Border>
<Ellipse Width="20" Height="20" Stroke="LightGray" StrokeThickness="1" Margin="60,0,0,0"/>
<Border Width="20" Height="20" Margin="60,0,0,0">
<TextBlock Foreground="LightGray" Text="+" HorizontalAlignment="Center" />
</Border>-->
</Canvas>
</Canvas>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
| How to create Custom progress bar minimum value 0 starts from the bottom left corner? | change the piang calculation in the convert function so that the starting point at the bottom left is taken into account in the calculation
double piang = (angle - 143.2) * Math.PI / 180;
so the class looks like this
public class AngleToPointConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
double angle = (double)value;
double radius = 50;
double piang = (angle - 143.2) * Math.PI / 180;
//double piang = angle * Math.PI / 310;
double px = Math.Sin(piang) * radius + radius;
double py = -Math.Cos(piang) * radius + radius;
return new System.Windows.Point(px, py);
}
public object ConvertBack(object value, Type targetTypes, object parameter, System.Globalization.CultureInfo culture)
{
throw new NotImplementedException();
}
}
You may have to adjust the angle (143.2).
|
76384224 | 76386230 | Good morning! I have a customer ID field included in the table DAC. I would like for this selector to operate identically to the customer ID field in the sales order form. I used all pertinent DAC code from the sales order to create the field in my custom screen; however, when I attempt to use all the same attributes for the restrictor, the following attribute cannot be accessed. I have all the appropriate references included in the project(PX.Objects.AR). Any assistance or work around for this issue would be greatly appreciated. Thank you!
#region CustomerID
public abstract class customerID : BqlInt.Field<customerID>
{
public class PreventEditBAccountCOrgBAccountID<TGraph> :
PreventEditBAccountRestrictToBase<BAccount.cOrgBAccountID, TGraph, NXBOL,
SelectFrom<NXBOL>
.Where<NXBOL.bolType.IsNotEqual<NXBOLType.nonProductMovement>.
And<NXBOL.customerID.IsEqual<BAccount.bAccountID.FromCurrent>>>>
where TGraph : PXGraph
{
protected override string GetErrorMessage(BAccount baccount, NXBOL document, string documentBaseCurrency)
{
return PXMessages.LocalizeFormatNoPrefix(Messages.CannotChangeRestricToIfShipmentExists,
documentBaseCurrency, baccount.AcctCD, document.BOLNbr);
}
}
public class PreventEditBAccountCOrgBAccountIDOnVendorMaint : PreventEditBAccountCOrgBAccountID<VendorMaint>
{
public static bool IsActive()
=> PXAccess.FeatureInstalled<FeaturesSet.multipleBaseCurrencies>();
}
public class PreventEditBAccountCOrgBAccountIDOnCustomerMaint : PreventEditBAccountCOrgBAccountID<CustomerMaint>
{
public static bool IsActive()
=> PXAccess.FeatureInstalled<FeaturesSet.multipleBaseCurrencies>();
}
}
protected Int32? _CustomerID;
[CustomerActive(
typeof(Search<BAccountR.bAccountID, Where<True, Equal<True>>>), // TODO: remove fake Where after AC-101187
Visibility = PXUIVisibility.SelectorVisible, Required = true)]
[CustomerOrOrganizationInNoUpdateDocRestrictor]
[PXForeignReference(typeof(Field<NXBOL.customerID>.IsRelatedTo<BAccount.bAccountID>))]
public virtual Int32? CustomerID
{
get
{
return this._CustomerID;
}
set
{
this._CustomerID = value;
}
}
#endregion
It appears that the [CustomerOrOrganizationInNoUpdateDocRestrictor] is not accessible. The project will not build, and I receive the following error:
Error CS0122 'CustomerOrOrganizationInNoUpdateDocRestrictor' is inaccessible due to its protection level
I have tried manipulating references; I would expect the attribute, which is defined in PX.Objects.AR, to work the same in my project as in PX.Objects.SO.
| Why is 'CustomerOrOrganizationInNoUpdateDocRestrictor' inaccessible in my Acumatica project? | CustomerOrOrganizationInNoUpdateDocRestrictor is an internal class, so you can't access it.
You can use this restrictor instead:
[PXRestrictor(
typeof(Where<Customer.type, IsNotNull, Or<Current<PX.Objects.SO.SOOrder.aRDocType>,
Equal<ARDocType.noUpdate>, And<Current<PX.Objects.SO.SOOrder.behavior>, Equal<SOBehavior.tR>,
And<Where<BAccountR.type, In3<BAccountType.branchType, BAccountType.organizationType>,
Or<PX.Objects.CR.BAccount.isBranch, Equal<True>>>>>>>),
"Only a customer or company business account can be specified.")]
|
76383243 | 76384553 | I have an operation that I want to succeed under two different conditions. For example, if the status code is 501 OR the message is not 'FAILED'. Is there a way to have assertions grouped together logically, like AND/OR? If assertion 1 passes OR assertion 2 passes, I want my test case to succeed.
| Chain assertions together in kotlin | @cactustictacs suggests "naming" your assertions and then chaining them.
I'll suggest an answer by showing code that validates four REST interface inputs that have some complex permutations of what is allowed / not allowed. This code is arguably easier to read than a classic if ... else structure.
See how they are evaluated using the when construct. Perhaps you can take these ideas for your assertions...
val userIdNotNull = userId != null
&& channelType == null
&& primaryMentorId == null
&& primaryClinicianId == null
val userIdAndChannelTypeNotNull = userId != null
&& channelType != null
&& primaryMentorId == null
&& primaryClinicianId == null
val primaryMentorIdNotNull = userId == null
&& channelType == null
&& primaryMentorId != null
&& primaryClinicianId == null
val primaryClinicianIdNotNull = userId == null
&& channelType == null
&& primaryMentorId == null
&& primaryClinicianId != null
val channels = when {
userIdNotNull -> getChannelsByUserId(userId)
userIdAndChannelTypeNotNull -> channelRepository.findByMemberIdAndChannelType(userId!!, channelType!!)
.ifEmpty { throw NotFoundException() }
primaryMentorIdNotNull -> channelRepository.findByPrimaryMentorId(primaryMentorId)
.ifEmpty { throw NotFoundException() }
primaryClinicianIdNotNull -> channelRepository.findByPrimaryClinicianId(primaryClinicianId)
.ifEmpty { throw NotFoundException() }
else -> throw InvalidRequestParameterException("This combination of request parameters is not supported")
}
|
76381084 | 76386067 | I've merged a 3D surface pressure field (ERA5, converted from Pa to hPa, function of lat,lon and time) with a 4D variable which is also a function of pressure levels (lat,lon,time,level).
So, my netcdf file has two fields, Temperature which is 4D:
float t(time, level, latitude, longitude)
surface pressure, which is 3d:
float sp(time, latitude, longitude)
The pressure dimension "level" is of course a vector:
int level(level)
What I want to do is make a mask for temperature for all locations where the pressure exceeds the surface pressure.
I know how to use nco to make a mask using a simple threshold:
ncap2 -s 'mask=(level>800)' t_ps.nc mask.nc
But of course when I try to use the surface pressure
ncap2 -s 'mask=(level>sp)' t_ps.nc mask.nc
I get the error
ncap2: ERROR level and template sp share no dimensions
I think what I need to do is make a new variable like "level3d" which duplicates the pressure "level" to be a function of lat and lon, which I can then use to efficiently make the mask, yes? But I'm not sure how to do this with a dimension (I thought about cdo enlarge but couldn't get it to work).
By the way, instead of posting the data, this is the python api script I used to retrieve it
import cdsapi
c = cdsapi.Client()
c.retrieve(
'reanalysis-era5-single-levels-monthly-means',
{
'format': 'netcdf',
'product_type': 'monthly_averaged_reanalysis',
'variable': 'surface_pressure',
'year': '2020',
'month': '03',
'time': '00:00',
},
'ps.nc')
c.retrieve(
'reanalysis-era5-pressure-levels-monthly-means',
{
'format': 'netcdf',
'product_type': 'monthly_averaged_reanalysis',
'variable': 'temperature',
'pressure_level': [
'1', '2', '3',
'5', '7', '10',
'20', '30', '50',
'70', '100', '125',
'150', '175', '200',
'225', '250', '300',
'350', '400', '450',
'500', '550', '600',
'650', '700', '750',
'775', '800', '825',
'850', '875', '900',
'925', '950', '975',
'1000',
],
'year': '2020',
'month': '03',
'time': '00:00',
},
't.nc')
| use surface pressure to mask 4D netcdf variable | Your diagnosis of the NCO behavior is essentially correct. The "broadcast"
ncap2 -s 'mask=(level>sp)' t_ps.nc mask.nc
fails because level and sp are arrays (not scalars) that share no dimensions. The fix would be to create and use a temporary 3D version of level with something like
ncap2 -s 'level_3D[level,latitude,longitude]=level;mask=(level_3D>sp)' t_ps.nc mask.nc
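If you prefer to build the mask outside of NCO, the same "broadcast level against sp" idea can be sketched in Python with NumPy; note these are synthetic stand-in arrays (assumed shapes and values), not your actual ERA5 data:

```python
import numpy as np

# Synthetic stand-ins for the ERA5 fields (shapes only, not real data):
# level is 1-D (nlev,) in hPa; sp is (time, lat, lon) surface pressure in hPa.
level = np.array([100.0, 500.0, 800.0, 1000.0])
sp = np.full((1, 3, 4), 850.0)

# Reshaping level to (1, nlev, 1, 1) broadcasts it against sp's
# (time, 1, lat, lon): the "level3d" duplication done implicitly.
mask = level[None, :, None, None] > sp[:, None, :, :]

print(mask.shape)        # (1, 4, 3, 4) -> (time, level, lat, lon)
print(mask[0, :, 0, 0])  # only the 1000 hPa level exceeds sp = 850
```

The broadcasting does exactly what the `level_3D` helper variable does in the ncap2 command above.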
|
76385355 | 76386251 | I am making charts in Excel from Python using XlsxWriter and want to make a chart with green colored bars for positive values, and red for negative.
Current code seems like this:
chart3 = workbook.add_chart({'type': 'column'})
chart3.add_series({
'values': '=Summary!$W$2:$W$76',
'categories': '=Summary!$A$2:$A$76',
'gap': 4,
'line': {'width': 1},
'name': '=Summary!$W$1',
'fill': {'color': 'green'},
'invert_if_negative': True
})
It differentiates positive and negative values in the chart, but the negative ones are just uncolored. Is there a way to make the inverted color red?
I already tried other properties like inverted_color and similar syntax, but it does not work.
| How to set invert_if_negative to fill bars to a solid color in python xlswriter | You will need version >= 3.1.1 of XlsxWriter which supports the invert_if_negative_color parameter:
from xlsxwriter import Workbook
workbook = Workbook("chart.xlsx")
worksheet = workbook.add_worksheet()
chart = workbook.add_chart({"type": "column"})
worksheet.write_column("A1", [3, 2, -3, 4, -2])
chart.add_series(
{
"values": "=Sheet1!$A$1:$A$5",
"fill": {"color": "green"},
"invert_if_negative": True,
"invert_if_negative_color": "red",
}
)
worksheet.insert_chart("C1", chart)
workbook.close()
Output:
|
76380701 | 76386331 | I'm trying to do drop caps in the Twenty Twenty-Three theme on WordPress 6.2.2.
All the docs I find when I google it are for older versions of WordPress, and possibly on an older theme. It used to be easy, but I can't find relevant docs for how to do this with the Twenty Twenty-Three theme.
And following on from that, how do I add custom CSS to use a different font for the drop caps?
I have a couple of older posts from an earlier version of WP that have drop caps and I used to have them styled via a child theme I was using, but I upgraded to the Twenty Twenty Three theme and I lost all my customisations.
I've added the following code via the "Tools > Theme file editor", but it doesn't seem to be working.
p.has-drop-cap:not(:focus)::first-letter
{
font-family: 'Fredericka the Great', cursive;
}
| Custom CSS drop caps in Wordpress 6.2.2 with Twenty Twenty Three theme | Neither TwentyTwentyTwo nor TwentyTwentyThree currently support dropCaps. Since the layout looks undesirable on certain user systems, it was agreed that dropcap support is not mandatory for either theme. Read more - WordPress issues: https://github.com/WordPress/twentytwentytwo/issues/180
But there's a workaround available. Since you have to touch the theme core files, the use of a child theme is probably not a bad idea. Otherwise, adjustments could be overwritten when updating.
The workaround was first pointed out in a comment by @colorful-tones a user on GitHub, in this thread. The related CSS is from @justintadlock, another GitHub user, you can read more here.
So here are the steps you need to take to enable dropCap support:
Since you are using the theme file editor, go there and open theme.json.
At about line 109, under typography, change the value for dropCap from false to true.
Save the file.
Open the theme's style.css and add:
.has-drop-cap:not(:focus)::first-letter {
font-family: var( --wp--custom--drop-cap--typography--font-family, inherit );
font-size: var( --wp--custom--drop-cap--typography--font-size, 5.5em );
font-weight: var( --wp--custom--drop-cap--typography--font-weight, 700 );
font-style: var( --wp--custom--drop-cap--typography--font-style, normal );
line-height: var( --wp--custom--drop-cap--typography--line-height, .85 );
margin: var( --wp--custom--drop-cap--spacing--margin, 0.05em 0.1em 0 0 );
padding: var( --wp--custom--drop-cap--spacing--padding, 0 );
}
Save the file.
NOTE: If you want to use a custom font, you may have to add your font to the typography section in theme.json. A support topic from WordPress.org could be helpful here. You can also try replacing all the variables directly in the CSS with your own values. I'm sorry, but I can't remember whether it worked exactly like that: I used this workaround only once, some time ago, and the page no longer exists in that form. You'll just have to test it yourself.
Finally, don't forget to properly include your Fredericka the Great font into Wordpress.
Hope this works for you.
|
76383382 | 76384575 | The following code generates this image
I want the "y"-axis label to be "Space" and the "x"-axis label to be "time" for the left subplot. However, I am failing to achieve this. Why does my plotting code not do as I desire?
p1 = surface(sol.t, x, z, xlabel="Time", ylabel="Space", zlabel="|u|²", colorbar = false)
p2 = contourf(sol.t,x,z, xlabel="Time", ylabel="Space")
plt = plot(p1,p2,layout=(1,2), size=(1200,800))
using DifferentialEquations, LinearAlgebra, Plots, SparseArrays
plotlyjs()
N₁=31 # Number of waveguides / size of solution vector
γ=1 # Nonlinear term strength parameter
h=1 # Grid spacing
centerGrid = (N₁-1)/2;
x = -centerGrid:centerGrid;
# Coefficient matrix of second-order centered-difference operator (δ²u)ₙ
M = spdiagm(-1 => fill(1,N₁-1), 0 => fill(-2,N₁), 1 => fill(1,N₁-1))
M[N₁,1] = 1; # Periodic boundary conditions
M[1,N₁] = 1;
# RHS of DNLS. The solution vector u is a N₁x1 complex vector
g₁(u,p,t) = 1*im*(p[1]*M*u + @.(γ*((abs(u))^2).*u) )
# Julia is explicitly typed (e.g, cannot have Int and Complex in same array) and so we must convert the object containing the initial data to be complex
u0 = Complex.(sech.(x))
tspan = (0.0,200)
prob = ODEProblem(g₁,u0,tspan, [h])
sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8)
z= [abs(sol.u[i][j])^2 for j=1:N₁, i=1:size(sol)[2]] # |u|²
p1 = surface(sol.t, x, z, xlabel="Time", ylabel="Space", zlabel="|u|²", colorbar = false)
p2 = contourf(sol.t,x,z, xlabel="Time", ylabel="Space")
plt = plot(p1,p2,layout=(1,2), size=(1200,800))
| Axis labeling for subplots | Setting custom axis labels for 3D plots doesn't work with Plots.jl and the plotlyjs() backend; the labels are only displayed with the GR backend.
You can try this version using PlotlyJS.jl instead of Plots.jl:
fig=make_subplots(rows=1, cols=2, specs =[Spec(kind="scene") Spec(kind="xy")],
horizontal_spacing=-0.1, column_widths=[0.65, 0.35])
add_trace!(fig, PlotlyJS.surface(x=sol.t, y=collect(x), z=z', showscale=false), row=1, col=1)
add_trace!(fig, PlotlyJS.contour(x=sol.t, y=collect(x), z=z), row=1, col=2)
relayout!(fig, template=templates["plotly_white"], font_size=11,
width=1000, height=600, scene=attr(xaxis_title="Time", yaxis_title="Space",
zaxis_title="|u|²", camera_eye=attr(x=1.8, y=1.8, z=1)),
xaxis2_title="Time", yaxis2_title="Space", margin_l=15)
display(fig)
|
76381599 | 76386720 | I have created a website and included identity for logging in. On the manage your account page, the new email box keeps autopopulating and I can't figure out how to stop it.
I have tried to add the 'autocomplete=off' to the tag (see below code) but it still populates.
@page
@using FarmersPortal.Areas.Identity.Pages.Account.Manage;
@model EmailModel
@{
ViewData["Title"] = "Manage Email";
ViewData["ActivePage"] = ManageNavPages.Email;
}
<style>
body {
background-image: url('http://10.48.1.215/PORTAL/hero-range-1.jpg');
height: 100%;
background-position: center;
background-repeat: no-repeat;
background-size: cover;
/* background-color: white;*/
}
</style>
<h3 style="color:white">@ViewData["Title"]</h3>
<partial name="_StatusMessage" for="StatusMessage" />
<div class="row">
<div class="col-md-6">
<form id="email-form" method="post">
<div asp-validation-summary="All" class="text-danger"></div>
<div class="form-floating input-group">
<input asp-for="Email" class="form-control" disabled />
<div class="input-group-append">
<span class="h-100 input-group-text text-success font-weight-bold">✓</span>
</div>
<label asp-for="Email" class="form-label"></label>
</div>
<div class="form-floating">
<input asp-for="Input.NewEmail" class="form-control" autocomplete="off" aria-required="true" />
<label asp-for="Input.NewEmail" class="form-label"></label>
<span asp-validation-for="Input.NewEmail" class="text-danger"></span>
</div>
<button id="change-email-button" type="submit" asp-page-handler="ChangeEmail" class="w-100 btn btn-lg btn-primary">Change email</button>
</form>
</div>
</div>
@section Scripts {
<partial name="_ValidationScriptsPartial" />
}
| 'newEmail' box that comes with identity is autopopulating and I can't stop it | asp-for sets the id, name and validation related attributes, and it also sets the value of the input element if there is already a value within the model passed to the view.
From your code, You are using:
<input asp-for="Input.NewEmail" class="form-control" autocomplete="off" aria-required="true" />
to input the value of Input.NewEmail. I think that before you render this view, Input.NewEmail already has a value; asp-for picks this value up and sets the value="xxx" attribute on the input tag.
So if you don't want to show the value, you can just use the name attribute instead of asp-for. Change your code to:
<input name="Input.NewEmail" class="form-control" autocomplete="off" aria-required="true" />
Then when this view is rendered, the input tag will show nothing.
|
76384911 | 76386287 | I have a situation where I use several sliders on the page due to which the actual height of the page changes
Let's say my html height is 600px but due to some sliders the actual page height is 1000px
And because of this, when I try to stick the footer to the bottom using position: absolute and bottom: 0, I have it placed at the end of the html height
I used an example to show how everything looks like for me
If I use position: relative then on other pages where the height is small, it will not be at the bottom
How can I stick the footer to the bottom of the page in this case?
I have also tried wrapping the entire html content in a class .wrapper { height: 100%; display: flex; flex-direction: column; } and for the footer use position: relative and margin-top: auto
This kind of helped, but then there were problems with the blocks that come after the HTML; they lost their width.
html {
height: 500px;
}
.main-content {
padding: 200px;
text-align: center;
}
.content {
padding: 300px;
text-align: center;
}
.footer {
padding: 40px 0;
position: absolute;
width: 100%;
bottom: 0;
text-align: center;
background: gray;
}
<html>
<div class="main-content"> MAIN CONTENT</div>
<div class="content">CONTENT</div>
<footer class="footer">FOOTER</footer>
</html>
| stick footer to bottom if actual page height is greater than html height | You can use flexbox. Uncomment height property in body to see the changes.
Check the elements html, body, main and footer in the code below.
Resources:
CSS Tricks | Flexbox
CSS Tricks | Flexbox and Auto Margins
Dev | Stick Footer to The Bottom of The Page
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html, body {
height: 100%;
}
body {
display: flex;
flex-direction: column;
/*height: 1000px;*/
}
header {
height: 50px;
background-color: cyan;
}
main {
flex: 1;
}
footer {
margin-top: auto;
height: 70px;
background-color: red;
}
<html>
<body>
<header>Header</header>
<main>Main</main>
<footer>Footer</footer>
</body>
</html>
|
76387119 | 76387120 | In Swift there is the guard let / if let pattern, allowing us to declare an object or a property only if it can be unwrapped.
It works as follows:
func getMeaningOfLife() -> Int? {
42
}
func printMeaningOfLife() {
if let name = getMeaningOfLife() {
print(name)
}
}
func printMeaningOfLife() {
guard let name = getMeaningOfLife() else {
return
}
print(name)
}
My question here is: is there a Java version of it?
| guard/if let in Java - declare a property or an object if it can be unwrapped | The answer is No.
Apparently this syntax also exists in Clojure, and according to this Stack Overflow answer there is no way in Java to declare a property only if it can be unwrapped.
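Purely for illustration, the closest Java idiom is java.util.Optional combined with an early return; this is a sketch mirroring the Swift example above, not a real guard statement (the GuardDemo class name is made up here):

```java
import java.util.Optional;

public class GuardDemo {
    // Stand-in for Swift's getMeaningOfLife() -> Int?
    static Optional<Integer> getMeaningOfLife() {
        return Optional.of(42);
    }

    // `guard let`-style: bail out early when the value is absent.
    static void printMeaningOfLife() {
        Optional<Integer> maybe = getMeaningOfLife();
        if (maybe.isEmpty()) {
            return;
        }
        int name = maybe.get();
        System.out.println(name);
    }

    public static void main(String[] args) {
        printMeaningOfLife();                              // prints 42
        getMeaningOfLife().ifPresent(System.out::println); // `if let`-style, prints 42
    }
}
```

Unlike Swift's guard, nothing forces the early return at compile time, so this is a convention rather than a language feature (Optional.isEmpty requires Java 11+).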
|
76387153 | 76387187 | I have a situation where I am writing a PowerShell GUI.
I need to take the password as a secure input. How can I do it?
The password input needs to be masked in the window.
Below is my code:
$Passwd = New-Object system.Windows.Forms.TextBox
$Passwd.multiline = $false
$Passwd.width = 150
$Passwd.height = 20
$Passwd.location = New-Object System.Drawing.Point(169,26)
$Passwd.Font = New-Object System.Drawing.Font('Microsoft Sans Serif',10)
$Passwd.ForeColor = [System.Drawing.ColorTranslator]::FromHtml("#7ed321")
$Passwd.BackColor = [System.Drawing.ColorTranslator]::FromHtml("#000000")
This is the code; how do I make the password input a secure, masked input?
| Powershell GUI, How to take password input without exposing | You need to set UseSystemPasswordChar to $true:
$Passwd.UseSystemPasswordChar = $true
|
76384259 | 76386348 | Is there a way to achieve this kind of alignment of numbers in multiple strings, preferably in Interface Builder? Please see the attached screenshot.
| String text alignment by decimal point (Swift) | One approach to achieve this is by using a table view and programmatically adding constraints to align the separator symbol. This method offers scalability as it only includes elements visible on the screen. An interesting aspect of this approach is that the separator's position may change based on the largest offset currently on the screen. Whether this behavior is desired or not depends on your specific requirements.
In this solution, I suggest splitting the string into three components and placing them into three separate labels: one for the content before the separator, one for the separator itself, and another for the content after the separator. Additionally, create an invisible reference view at the top level, which will be used to connect the separator label.
Although the following code is implemented programmatically for clarity in understanding constraint connections, it is recommended to move most of the code into the storyboard for better organization and maintainability.
I hope this code snippet helps you solve your problem.
class OffsetNumberViewController: UIViewController {
var values: [NSDecimalNumber] = [] {
didSet {
tableView.reloadData()
}
}
private lazy var tableView: UITableView = {
let view = UITableView()
view.delegate = self
view.dataSource = self
return view
}()
private lazy var numberFormatter: NumberFormatter = {
let formatter = NumberFormatter()
formatter.decimalSeparator = "."
formatter.usesGroupingSeparator = true
formatter.groupingSeparator = ","
formatter.groupingSize = 3
formatter.maximumFractionDigits = 5
return formatter
}()
private lazy var referenceView = {
let view = UIView()
view.isHidden = true
self.view.addSubview(view)
view.translatesAutoresizingMaskIntoConstraints = false
self.view.addConstraints([
.init(item: view, attribute: .trailing, relatedBy: .lessThanOrEqual, toItem: self.view, attribute: .trailing, multiplier: 1.0, constant: 0.0),
.init(item: view, attribute: .top, relatedBy: .equal, toItem: self.view, attribute: .top, multiplier: 1.0, constant: 0.0),
.init(item: view, attribute: .bottom, relatedBy: .equal, toItem: self.view, attribute: .bottom, multiplier: 1.0, constant: 0.0)
])
return view
}()
override func viewDidLoad() {
super.viewDidLoad()
tableView.register(NumberTableViewCell.self, forCellReuseIdentifier: "amountCell")
tableView.frame = view.bounds
view.addSubview(tableView)
values = generateRandomValues(count: 1000)
}
private func generateRandomValues(count: Int) -> [NSDecimalNumber] {
(0..<count).map { index in
let startValue: Int = 1234567890
let maximumDivision = 5
let randomDivision: Int = 1<<Int.random(in: 0...maximumDivision)
return .init(integerLiteral: startValue).dividing(by: .init(integerLiteral: randomDivision))
}
}
private func formatValue(_ value: NSDecimalNumber) -> String {
numberFormatter.string(for: value) ?? "NaN"
}
}
// MARK: - UITableViewDelegate, UITableViewDataSource
extension OffsetNumberViewController: UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
values.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
if let cell = tableView.dequeueReusableCell(withIdentifier: "amountCell", for: indexPath) as? NumberTableViewCell {
cell.setup(withNumberAsString: formatValue(values[indexPath.row]), decimalSeparator: numberFormatter.decimalSeparator)
return cell
} else {
return UITableViewCell()
}
}
func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
(cell as? NumberTableViewCell)?.attachCenterTo(referenceView, parent: self.view)
}
func tableView(_ tableView: UITableView, didEndDisplaying cell: UITableViewCell, forRowAt indexPath: IndexPath) {
(cell as? NumberTableViewCell)?.detachExternalConstraints()
}
}
// MARK: - NumberTableViewCell
extension OffsetNumberViewController {
class NumberTableViewCell: UITableViewCell {
lazy private var leftSideLabel: UILabel = UILabel()
lazy private var rightSideLabel: UILabel = UILabel()
lazy private var separatorLabel: UILabel = UILabel()
private var currentExternalConstraints: [NSLayoutConstraint] = []
lazy private var stackView: UIStackView = {
let stackView = UIStackView()
stackView.translatesAutoresizingMaskIntoConstraints = false
stackView.alignment = .fill
stackView.axis = .horizontal
stackView.distribution = .fill
stackView.addArrangedSubview(leftSideLabel)
stackView.addArrangedSubview(separatorLabel)
stackView.addArrangedSubview(rightSideLabel)
addSubview(stackView)
addConstraints([
.init(item: stackView, attribute: .right, relatedBy: .equal, toItem: self, attribute: .right, multiplier: 1.0, constant: -12.0),
.init(item: stackView, attribute: .top, relatedBy: .equal, toItem: self, attribute: .top, multiplier: 1.0, constant: 0.0),
.init(item: stackView, attribute: .bottom, relatedBy: .equal, toItem: self, attribute: .bottom, multiplier: 1.0, constant: 0.0),
])
return stackView
}()
func setup(withNumberAsString numberString: String, decimalSeparator: String) {
let components = numberString.components(separatedBy: decimalSeparator)
let _ = stackView
separatorLabel.text = decimalSeparator
if components.count == 1 {
leftSideLabel.text = components[0]
rightSideLabel.text = ""
separatorLabel.alpha = 0
} else if components.count == 2 {
leftSideLabel.text = components[0]
rightSideLabel.text = components[1]
separatorLabel.alpha = 1
} else {
// Something went wrong
leftSideLabel.text = ""
rightSideLabel.text = "error"
separatorLabel.alpha = 0
}
}
func detachExternalConstraints() {
currentExternalConstraints.forEach { constrain in
constrain.isActive = false
}
currentExternalConstraints = []
}
func attachCenterTo(_ referenceView: UIView, parent: UIView) {
currentExternalConstraints = [
.init(item: separatorLabel,
attribute: .centerX,
relatedBy: .greaterThanOrEqual,
toItem: referenceView,
attribute: .centerX,
multiplier: 1.0,
constant: 0.0),
.init(item: referenceView,
attribute: .centerX,
relatedBy: .greaterThanOrEqual,
toItem: separatorLabel,
attribute: .centerX,
multiplier: 1.0,
constant: 0.0)
]
parent.addConstraints(currentExternalConstraints)
}
}
}
|
76385322 | 76386379 | I am developing 2 plugins for Banno and have a hosting question.
My client will be hosting the plugins on S3. Can I use 1 bucket for both plugins or will they each need a bucket?
Thank you.
I haven't uploaded anything to S3 yet.
| Banno External Plugin S3 Hosting Question | We don't have material specific to Amazon Web Services (e.g., S3) but here's some general guidance which may be helpful.
No matter what, you'll need to make sure that your plugin's content is hosted by your public-facing web server. This means your web server must be accessible via the internet and cannot require the user to be on a specific network or VPN.
See these resources for more info:
Plugin Framework / Architecture / Hosting
Plugin Framework / Guides/ Designing and Developing Plugins
The gist is that how you build your plugin's web service is at your discretion.
Whether you have separate S3 buckets or a single S3 bucket, that's up to you to decide what's best for your plugin.
|
76382443 | 76384657 | I'm currently designing a menu for a food festival. I use Google Sheets. I have a sheet filled with food choices. The menu for a given week should not have food items from the previous weeks. This is a mandatory requirement, and I'm not able to get the drop-down if I use data validation with a custom formula.
I use =FILTER('Item Suggestions'!A:A,'Item Suggestions'!E:E="Y") as the custom formula for validating the data.
Is there any other way (or a tweak to get the data validation drop down) to get the drop-down & keep the drop-down list filtered?
Each of the columns in the "Menu" sheet should pick from column A of the "Item Suggestions" sheet. But the data should be based on column E. If Column E is Y, then that respective data in Column A should be shown in the drop-down
Menu Sheet:
Column A
Column B
Column C
Column D
Column E
Appetizers
Mains
Course 2
Dessert
Drinks
Soup
Chicken
Creamy Pasta
Strawberry Mousse
Fire and Ice
Broccoli
Rice & Curry
Roti & Subzi
Icecream Sundae
Mojito
Item Suggestions sheet:
Column A
Column B
Column C
Column D
Column E
Dish
Course
Allergens
Type
Used in Previous weeks?
Creamy Pasta
Main
Gluten
Vegetarian
Y
Chocolate Marble Cake
Dessert
Wheat
Vegan
N
| Google Sheets - Data Validation - Conditional based on Column in another sheet | Validation Helper Columns
I added some 'helper' columns for the validation.
They can be on the same sheet or a different one.
There is one for each course: Appetizer, Main, Dessert, and Drink. I assume Main and Course 2 both share the same dishes.
The FILTER formula would return an array of Dishes that match the correct Course, and haven't yet been used:
=IFERROR(FILTER(Dishes,Courses=thisCourse,isUsed<>"Y"),"-")
For the validation rule,
the criteria would be "Dropdown (from a range)" with the range being the appropriate helper column for each Course
The "Apply to range" value would be the appropriate Course column in your Menu table
Please note that all populated menu items will 'always' show the error flag (red triangle in the top right corner). This is because the moment they are used, they are no longer valid values. This doesn't affect the functionality of the menu dropdowns. Used menu items will be filtered from the dropdowns, and you will not be able to add a used menu item manually with the dropdown properties set to reject. Just a visual distraction.
Dropdown Formula
Single Formula
Will generate all dropdowns at once and centralizes modifications
=BYCOL(M2:P2, LAMBDA(c,
IFERROR(FILTER(G:G,H:H=c,K:K<>"Y"),"-")))
Individual Formula
Needs to be manually copied to each column
=IFERROR(FILTER($G:$G,$H:$H=M2,$K:$K<>"Y"),"-")
Filtering Formula
Dropdown with Filtered Dishes
Formula to Mark Dishes When Used
Your Master List of dishes includes a column to mark if a dish has been used previously
Your menu's dropdowns are based on that and it makes sense to update the "used/not used" status dynamically when a dish is added or removed from a menu.
This can be achieved using a formula in the Master List that marks each dish based on whether it exists in the menu.
Dish "Is Used" Formula
Note that the 'Single Formula' includes the column heading "Used in previous weeks?".
This was intentional in order to place the formula a line above the Master List data
Offsetting the formula from the data by a row, allows the data to be sorted without impacting the formula. For example, you could sort the Master List by any of Dishes, Courses, Allergens, or Type
Single Formula
={"Used in previous weeks?";
BYROW(G3:G51, LAMBDA(r,
IFERROR(IF(ROWS(FILTER(r,COUNTIF(A:E,r)))>0,"Y"))))}
Individual Formulas
Needs to be copied to each row
=IFERROR(IF(ROWS(FILTER(G3,COUNTIF(A:E,G3)))>0,"Y"))
Sorted Asc. by Dishes
Sorted Asc. by Courses then Dishes
|
76382591 | 76384670 | In Ubuntu-22, google-cloud has been installed through snap store;
> whereis gcloud
gcloud: /snap/bin/gcloud
> snap list | grep google
google-cloud-sdk 432.0.0 346 latest/stable google-cloud-sdk** classic
Docker has been installed via snap too;
> snap list | grep docker
docker 20.10.24 2893 latest/stable canonical**
And I have authenticated my account to a private GCR as below;
> gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?...<long_url>
You are now logged in as [<my_email@address.com>].
Your current project is [<desired_project_name>]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
Double-checked the login process;
> gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* <my_email@address.com>
To set the active account, run:
$ gcloud config set account `ACCOUNT`
But, when I try to pull or push any image, I hit the following permission issue;
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I am able to access the image that I try to pull from the private GCR in my browser, which makes me think it is an issue related to creds when performing docker pull in my terminal.
What am I missing here?
PS: The solution in this question did not work for me Unable to push to Google Container Registry - Permission issue
EDIT:
As asked in the comments, I should mention that I performed one more step before auth login, which is gcloud auth configure-docker:
> gcloud auth configure-docker
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
After update, the following will be written to your Docker config file located at
[/home/<user>/.docker/config.json]:
{
"credHelpers": {
"gcr.io": "gcloud",
"us.gcr.io": "gcloud",
...
}
}
Do you want to continue (Y/n)?
Docker configuration file updated.
| Google Container Registry: Permission issue while trying to pull/push images with authenticated credentials | Removing the snap installation and installing Docker with the apt package manager fixed my issue.
The differences I observed between the two installations:
With snap, once gcloud auth login directed me to the browser, authentication was completed just by choosing a Google account (please see the third code block in my question; no authorization code was asked for).
With apt, after choosing the desired Google account, I was directed to another page that provided an authorization code, which needed to be entered in the terminal:
> gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?...<long_url>
Enter authorization code: <Code_from_browser> // This is the difference!!
You are now logged in as [<my_email@address.com>].
Your current project is [<desired_project_name>]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
Thank you @JohnHanley, who pointed out that Docker recommends installing via apt.
|
76385271 | 76386421 | I'm trying to optimize my Postgres query. I'm running into problems with some of the joins here. My main issue is around the filter h.type='inNetwork' and my geometry search ST_Intersects(ST_MakeValid(ser.boundaries)::geography, ST_MakeValid(ST_SetSRID(ST_GeomFromGeoJson(<INSERT_GEOMETRY_JSON>)). Something about that specific filter increases the search time by ~10x. The other filters don't seem to have much of an effect on the search speed. As a side note, there are additional filters that are used conditionally, which is why some of the join tables here may seem irrelevant.
Query in question:
SELECT DISTINCT r.id, r.profitability
FROM rolloff_pricing as r
LEFT JOIN service_areas ser on r.service_area_id = ser.id
LEFT JOIN sizes as s on r.size_id = s.id
LEFT JOIN sizes as sa on r.sell_as = sa.id
LEFT JOIN waste_types w on w.id = r.waste_type_id
LEFT JOIN regions reg on reg.id = ser.region_id
LEFT JOIN haulers h on h.id = reg.hauler_id
LEFT JOIN current_availability ca on ca.region_id = reg.id
LEFT JOIN regions_availability ra on ra.region_id = reg.id
LEFT JOIN current_availability_new_deliveries cand on ca.id = cand.current_availability_id and r.size_id = cand.size_id
LEFT JOIN exceptions ex on ex.region_id = reg.id
WHERE ser.active is true
and ST_Intersects(ST_MakeValid(ser.boundaries)::geography, ST_MakeValid(ST_SetSRID(ST_GeomFromGeoJson('{
"type": "POINT",
"coordinates": [
"-95.3595563",
"29.7634871"
]
}'),4326))::geography)
and h.active is true
and ra.delivery_type='newDeliveries'
and h.type='inNetwork'
GROUP BY r.id ORDER BY profitability desc OFFSET 0 ROWS FETCH NEXT 8 ROWS ONLY
Here is the EXPLAIN (ANALYZE, BUFFERS):
Limit (cost=246.23..246.29 rows=8 width=21) (actual time=3711.860..3711.866 rows=8 loops=1)
Buffers: shared hit=15048
-> Unique (cost=246.23..246.29 rows=8 width=21) (actual time=3711.859..3711.860 rows=8 loops=1)
Buffers: shared hit=15048
-> Sort (cost=246.23..246.25 rows=8 width=21) (actual time=3711.858..3711.858 rows=8 loops=1)
" Sort Key: r.profitability DESC, r.id"
Sort Method: quicksort Memory: 28kB
Buffers: shared hit=15048
-> Group (cost=246.07..246.11 rows=8 width=21) (actual time=3711.820..3711.841 rows=48 loops=1)
Group Key: r.id
Buffers: shared hit=15048
-> Sort (cost=246.07..246.09 rows=8 width=21) (actual time=3711.817..3711.823 rows=216 loops=1)
Sort Key: r.id
Sort Method: quicksort Memory: 41kB
Buffers: shared hit=15048
-> Hash Left Join (cost=154.30..245.95 rows=8 width=21) (actual time=3711.508..3711.745 rows=216 loops=1)
Hash Cond: ((reg.id)::text = (ex.region_id)::text)
Buffers: shared hit=15048
-> Hash Join (cost=150.45..242.05 rows=8 width=37) (actual time=3711.490..3711.705 rows=144 loops=1)
Hash Cond: ((ra.region_id)::text = (reg.id)::text)
Buffers: shared hit=15045
-> Seq Scan on regions_availability ra (cost=0.00..89.11 rows=643 width=16) (actual time=0.006..0.186 rows=643 loops=1)
Filter: ((delivery_type)::text = 'newDeliveries'::text)
Rows Removed by Filter: 1286
Buffers: shared hit=65
-> Hash (cost=150.34..150.34 rows=9 width=53) (actual time=3711.461..3711.461 rows=144 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 21kB
Buffers: shared hit=14980
-> Hash Right Join (cost=73.72..150.34 rows=9 width=53) (actual time=3711.218..3711.442 rows=144 loops=1)
Hash Cond: ((ca.region_id)::text = (reg.id)::text)
Buffers: shared hit=14980
-> Seq Scan on current_availability ca (cost=0.00..69.02 rows=2002 width=32) (actual time=0.009..0.124 rows=2002 loops=1)
Buffers: shared hit=49
-> Hash (cost=73.68..73.68 rows=3 width=69) (actual time=3711.173..3711.173 rows=48 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
Buffers: shared hit=14931
-> Nested Loop (cost=0.84..73.68 rows=3 width=69) (actual time=2262.438..3711.145 rows=48 loops=1)
Buffers: shared hit=14931
-> Nested Loop (cost=0.55..44.90 rows=1 width=48) (actual time=2262.424..3710.955 rows=7 loops=1)
Buffers: shared hit=14877
-> Nested Loop (cost=0.28..38.20 rows=3 width=16) (actual time=0.012..4.723 rows=609 loops=1)
Buffers: shared hit=1418
-> Seq Scan on haulers h (cost=0.00..21.60 rows=2 width=16) (actual time=0.003..0.698 rows=439 loops=1)
Filter: ((active IS TRUE) AND ((type)::text = 'inNetwork'::text))
Rows Removed by Filter: 89
Buffers: shared hit=15
-> Index Scan using regions_hauler_id_idx on regions reg (cost=0.28..8.29 rows=1 width=32) (actual time=0.006..0.007 rows=1 loops=439)
Index Cond: ((hauler_id)::text = (h.id)::text)
Buffers: shared hit=1403
-> Index Scan using service_areas_region_id_idx on service_areas ser (cost=0.28..2.22 rows=1 width=32) (actual time=6.035..6.085 rows=0 loops=609)
Index Cond: ((region_id)::text = (reg.id)::text)
" Filter: ((active IS TRUE) AND ((st_makevalid(boundaries))::geography && '0101000020E610000087646DF802D757C0FA6AFDE373C33D40'::geography) AND (_st_distance((st_makevalid(boundaries))::geography, '0101000020E610000087646DF802D757C0FA6AFDE373C33D40'::geography, '0'::double precision, false) < '1.00000000000000008e-05'::double precision))"
Rows Removed by Filter: 3
Buffers: shared hit=13459
-> Index Scan using rolloff_pricing_service_area_id_idx on rolloff_pricing r (cost=0.29..28.70 rows=8 width=83) (actual time=0.013..0.019 rows=7 loops=7)
Index Cond: ((service_area_id)::text = (ser.id)::text)
Buffers: shared hit=54
-> Hash (cost=3.38..3.38 rows=38 width=48) (actual time=0.012..0.012 rows=39 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
Buffers: shared hit=3
-> Seq Scan on exceptions ex (cost=0.00..3.38 rows=38 width=48) (actual time=0.003..0.007 rows=39 loops=1)
Buffers: shared hit=3
Planning Time: 1.031 ms
Execution Time: 3711.956 ms
My understanding of what's happening here is that my filter is bringing in additional rows (the large majority of the h table is true for the condition h.type='inNetwork'), which is making my geometry query run for a much larger set of rows than intended.
Something that I've tried is putting the geometry query into a subquery (because the geometry query actually runs pretty quickly itself) to get a set of r.id's that I could use in a where in clause. This doesn't seem to work well either, though. Here is my modified query, which is also too slow:
SELECT DISTINCT r.id, r.profitability
FROM rolloff_pricing as r
LEFT JOIN service_areas ser on r.service_area_id = ser.id
LEFT JOIN sizes as s on r.size_id = s.id
LEFT JOIN sizes as sa on r.sell_as = sa.id
RIGHT JOIN waste_types w on w.id = r.waste_type_id
RIGHT JOIN regions reg on reg.id = ser.region_id
RIGHT JOIN haulers h on h.id = ser.hauler_id
RIGHT JOIN current_availability ca on ca.region_id = reg.id
RIGHT JOIN regions_availability ra on ra.region_id = reg.id
LEFT JOIN current_availability_new_deliveries cand on ca.id = cand.current_availability_id and r.size_id = cand.size_id
RIGHT JOIN exceptions ex on ex.region_id = reg.id
WHERE r.id in (
select r2.id
from rolloff_pricing as r2
LEFT JOIN service_areas ser2 on r2.service_area_id = ser2.id
WHERE
ST_Intersects(ST_MakeValid(ser2.boundaries)::geography, ST_MakeValid(ST_SetSRID(ST_GeomFromGeoJson('{
"type": "POINT",
"coordinates": [
"-95.3595563",
"29.7634871"
]
}'),4326))::geography)
and ser2.active is true
)
and h.active is true
and ra.delivery_type='newDeliveries'
and h.type='inNetwork'
GROUP BY r.id ORDER BY profitability desc OFFSET 0 ROWS FETCH NEXT 8 ROWS ONLY
It's interesting to me because the subquery here resolves by itself very quickly. And if I sub out the subquery for the returned results, the whole thing resolves very quickly as well. So I'm not sure how to approach this exactly. My next guess is to just run a completely separate query for the r.ids and then pass them through to the "main" query.
Maybe relevant info: this query is being generated and executed in an eloquent-based api
How can I go about approaching improving the speed here?
| Optimizing joins in a postgres/postgis query |
the large majority of the h table is true for the condition h.type='inNetwork', which is making my geometry query run for a much larger set of rows than intended.
I don't understand. If most of the table meets the condition h.active is true and h.type='inNetwork', then most of the table gets processed. What else could have possibly been intended? The estimate there is pretty horrible (estimated rows 2, actual 439) but that must be just because your stats are horribly out of date. There is really no good reason for the estimate to be off by so much. You should run VACUUM ANALYZE on all tables involved in this query, after making sure there are no transactions being held open. If it doesn't fix the query directly, it will at least produce plans which are easier to understand.
-> Index Scan using service_areas_region_id_idx on service_areas ser (cost=0.28..2.22 rows=1 width=32) (actual time=6.035..6.085 rows=0 loops=609)
Index Cond: ((region_id)::text = (reg.id)::text)
Filter: ((active IS TRUE) AND ((st_makevalid(boundaries))::geography && '0101000020E610000087646DF802D757C0FA6AFDE373C33D40'::geography) AND (_st_distance((st_makevalid(boundaries))::geography, '0101000020E610000087646DF802D757C0FA6AFDE373C33D40'::geography, '0'::double precision, false) < '1.00000000000000008e-05'::double precision))"
Rows Removed by Filter: 3
Buffers: shared hit=13459
This is the one place which takes pretty much all of the time. And it is hard to understand what it is actually doing. Why on earth would it take 13459 buffers hits to run this scan 609 times? that is 22 buffers for each loop, but descending an index should only take 3 to 4 buffer accesses. You could be filtering out a lot of rows due to the filter condition, but in that case Rows Removed by Filter would need to be a lot more than 3. Maybe the index is stuffed full of dead entries, which then get filtered out but don't get tabulated in 'Rows Removed by Filter'. (A VACUUM when there are no transactions being held open would fix that). Or maybe the geometry column is very large and so gets TOASTed over many pages.
And to be clear here, I'm not saying the large number of buffer accesses are causing the slowness; they are all "buffer hits" after all and so should be fast. But it is an oddity that should be investigated.
|
I am creating a link list widget. Whenever we click on the Title button, it adds a list with an add-link option. Everything is working fine, but whenever I try to click the remove-title button, the button is not working anymore. The alert is working and logs are showing in the console, but the action is not working. Can you please check?
Codepen Reference Link
| Link list sidebar widget remove button not working in jquery | At line 23 you are only selecting a parent HTML element when the remove-title button is clicked, but doing nothing with it.
if ($(".remove-title").length) {
$("body").on("click", ".remove-title", function() {
console.log("clicked");
$(this).parents(".btn-options");
});
}
First you should make sure you're not deleting the only section left. If so, the .add-title button is triggered first.
if ($(".remove-title").length) {
$("body").on("click", ".remove-title", function() {
console.log("clicked");
var $this = $(this);
var parent = $this.parents(".title-area");
if (!parent.siblings(".title-area").length) {
$this.siblings(".add-title").trigger("click");
}
var title = parent.find("input.title-text").eq(0).val();
var toast = parent.siblings(".title-area").find(".btn-toast");
toast.show().html("Title removed: <span>" + title + "</span>")
.delay(400).fadeOut("slow");
parent.remove();
});
}
As you can see, I've also added a "toast" element to the btn-options div to briefly display the title of the deleted section. You can find the rest of those changes here: Codepen
|
76387124 | 76387192 | Adding and removing classes to elements in React
I’m a complete newb with React but basically I’ve been trying to make it so that a container goes from hidden to visible when you click a button.
Usually I’d just do an eventListener and add.classList or remove.classList but I can’t find the equivalent to that in React?
I’ve tried figuring it out with useState and everything but I feel like I’m just overlooking something simple.
I would really appreciate some help; it’s been like hours and my tiny brain is gonna explode
| What is the equivalent of add.classList in React for toggling visibility? | I would recommend adding a condition to render the element/component instead of using classes.
const [visible, setVisible] = useState(false);
return (
<div>
<button onClick={() => setVisible(!visible)}>toggle</button>
{visible && <span>hello</span>}
</div>
);
|
76383780 | 76384730 | I have the following point configuration:
import numpy as np
T=np.array([9,9])
X1=np.array([8,16])
X2=np.array([16,3])
points=np.array([[4, 15],
[13,17],
[2, 5],
[16,8]])
This can be represented as:
Given T, X1, and X2, I want to find all points of the array points that are inside the yellow region. This yellow region is always on the "opposite side" of the points X1 and X2.
How can I achieve this in a simple and efficient way?
Edit1 (trying B Remmelzwaal solution)
T=np.array([9,9])
X1=np.array([10,2])
X2=np.array([2,15])
points=np.array([[2, 5]])
valid_points = list()
# calculating y = mx + b for line T, X1
slope1 = np.diff(list(zip(T, X1)))
m1 = np.divide(slope1[1], slope1[0])
b1 = T[1] - m1*T[0]
# calculating y = mx + b for line T, X2
slope2 = np.diff(list(zip(T, X2)))
m2 = np.divide(slope2[1], slope2[0])
b2 = T[1] - m2*T[0]
for point in points:
# check if point is under both lines
for m, b in (m1, b1), (m2, b2):
if point[1] > m*point[0] + b:
break
else:
# only append if both checks pass
valid_points.append(point)
print(valid_points)
The configuration is the following:
and the code returns [2,5] when it should return []. This is not correct, since the region of interest is now in the opposite region (see image)
| Find points inside region delimited by two lines | The naive solution to this can be thought of as a series of stages
embed the values into equations in a Two-Point Form
for each line defined by the Equations
for each point in the collection to compare
at X, see if Y is below the line value
boolean AND on the results, such that only values below both lines match
However, this can be much faster with NumPy's powerful numeric methods, as you can directly use the values in collections without bothering to create the intermediate equations; you then need to pose the problem in the form NumPy expects, which makes more sense for a great number of lines (hundreds, millions, ...)
very extended approach
import numpy as np
T=np.array([9,9])
X1=np.array([8,16])
X2=np.array([16,3])
points=np.array([[4, 15],
[13,17],
[2, 5],
[16,8]])
equations = []
for point_pair in [(T, X1), (T, X2)]:
# extract points
(x1, y1), (x2, y2) = point_pair # unpack
# create equation as a function of X to get Y
fn = lambda x, x1=x1, y1=y1, x2=x2, y2=y2: (y2-y1)/(x2-x1)*(x-x1)+y1
equations.append(fn)
results = {} # dict mapping lines to their point comparisons
for index, equation in enumerate(equations):
key_name = "line_{}".format(index + 1)
results_eq = []
for point in points:
point_x, point_y = point # unpack
line_y = equation(point_x)
results_eq.append(point_y < line_y) # single bool
array = np.array(results_eq) # list -> array of bools
results[key_name] = array # dict of arrays of bools
# & is used here to compare boolean arrays such that both are True
final_truthyness = results["line_1"] & results["line_2"]
print(final_truthyness)
>>> print(final_truthyness)
[False False True False]
Alternatively, you can carefully order your points and take the Cross Product
NOTE that the point ordering matters here such that points below are really to the right of the line (vector); you can determine this by comparing the X values of the points
>>> X1[0] < T[0], X2[0] < T[0] # determine point ordering
(True, False)
>>> a = np.cross(points - X1, T - X1) > 0
>>> b = np.cross(points - T, X2 - T) > 0
>>> a,b ; a&b # individual arrays ; AND
(array([ True, False, True, False]), array([False, False, True, False]))
array([False, False, True, False])
Finally, you might take some caution in a larger program to special case point pairs which are exactly the same point
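Consolidating the cross-product approach above into one self-contained, runnable sketch (same sample data as the question; variable names are mine):

```python
import numpy as np

T = np.array([9, 9])
X1 = np.array([8, 16])
X2 = np.array([16, 3])
points = np.array([[4, 15], [13, 17], [2, 5], [16, 8]])

# the sign of the 2D cross product tells which side of each
# directed line (X1 -> T and T -> X2) a point lies on
a = np.cross(points - X1, T - X1) > 0
b = np.cross(points - T, X2 - T) > 0
inside = points[a & b]
print(inside)  # [[2 5]]
```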
|
76384948 | 76386605 | Xdebug does not stop at breakpoints.
I tried different versions of Xdebug. (current v1.32.1, v1.32.0, v1.31.1, v1.31.0, v1.30.0)
This is my configuration at the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Listen for Xdebug",
"type": "php",
"request": "launch",
"port": 9003
},
{
"name": "Launch currently open script",
"type": "php",
"request": "launch",
"program": "${file}",
"cwd": "${fileDirname}",
"port": 0,
"runtimeArgs": [
"-dxdebug.start_with_request=yes"
],
"env": {
"XDEBUG_MODE": "debug,develop",
"XDEBUG_CONFIG": "client_port=${port}"
}
},
{
"name": "Launch Built-in web server",
"type": "php",
"request": "launch",
"runtimeArgs": [
"-dxdebug.mode=debug",
"-dxdebug.start_with_request=yes",
"-S",
"localhost:0"
],
"program": "",
"cwd": "${workspaceRoot}",
"port": 9003,
"serverReadyAction": {
"pattern": "Development Server \\(http://localhost:([0-9]+)\\) started",
"uriFormat": "http://localhost:%s",
"action": "openExternally"
}
}
]}
Could there be a conflict with the web server?
This is my configuration in the php.ini (all the way at the bottom of the file):
[xdebug]
zend_extension=C:\xampp\php\ext\php_xdebug.dll
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_port = 9003 // it also doesnt work without this line.
Installation Wizard
Summary from <https://xdebug.org/wizard>
• Xdebug installed: 3.2.1
• Server API: Apache 2.0 Handler
• Windows: yes
• Compiler: MS VS16
• Architecture: x64
• Zend Server: no
• PHP Version: 8.2.4
• Zend API nr: 420220829
• PHP API nr: 20220829
• Debug Build: no
• Thread Safe Build: yes
• OPcache Loaded: no
• Configuration File Path: no value
• Configuration File: C:\xampp\php\php.ini
• Extensions directory: C:\xampp\php\ext
I've downloaded the file from the wizard and renamed it correctly.
Port 9003 is correct according to the documentation. But I also tried port 9000 as well.
If I go to https://portchecker.co/checking and check for Port 9000 or 9003 they are closed.
I reinstalled XAMPP
I reinstalled VS Code
I also went to Settings -> Features -> Debug -> Allow Breakpoints Everywhere.
| Xdebug not stopping at breakpoints | I went ahead and installed xampp and xdebug.
Our launch.json files and wizard output are identical and it seems to work ok for me.
My php.ini doesn't include the last line with the port number or the apostrophe after the xdebug closing brace that you have.
[xDebug]
zend_extension = xdebug
xdebug.mode = debug
xdebug.start_with_request = yes
You shouldn't need the entire file path for your zend_extension considering that the extensions directory is already mapped for you.
I would double check that you've defined your "php.debug.executablePath": in vscode settings as well.
Here's what mine looks like after
Running Listen for Xdebug in vscode
Navigating to the file I used here at http://localhost/info/index.php in the browser
If making those changes still doesn't work, maybe double check that the breakpoint is actually reachable.
|
I have tried several times to perform some numeric aggregation methods on numeric data with pandas. However, I have received a NotImplementedError, which then throws a TypeError, whenever I do so. I hypothesize that pandas is refusing to ignore the string columns when performing said numerical tasks. How do I prevent this?
Given a pivot table named matrix_data, and with pandas imported as pan:
Account Number Company Contact Account Manager Product Licenses
0 2123398 Google Larry Pager Edward Thorp Analytics 150
1 2123398 Google Larry Pager Edward Thorp Prediction 150
2 2123398 Google Larry Pager Edward Thorp Tracking 300
3 2192650 BOBO Larry Pager Edward Thorp Analytics 150
4 420496 IKEA Elon Tusk Edward Thorp Analytics 300
Sale Price Status
0 2100000 Presented
1 700000 Presented
2 350000 Under Review
3 2450000 Lost
4 4550000 Won
Trying to aggregate all numerical values by company:
pan.pivot_table(matrix_data, index = "Company", aggfunc="mean");
throws an exception like so:
NotImplementedError Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python311\site-packages\pandas\core\groupby\groupby.py:1490, in GroupBy._cython_agg_general..array_func(values)
1489 try:
-> 1490 result = self.grouper._cython_operation(
1491 "aggregate",
1492 values,
1493 how,
1494 axis=data.ndim - 1,
1495 min_count=min_count,
1496 **kwargs,
1497 )
1498 except NotImplementedError:
1499 # generally if we have numeric_only=False
1500 # and non-applicable functions
...
1698 # e.g. "foo"
-> 1699 raise TypeError(f"Could not convert {x} to numeric") from err
1700 return x
TypeError: Could not convert Larry PagerLarry PagerLarry Pager to numeric
dataframe.groupby(["col_name1"]).mean() will throw an identical error
I'm on windows 10, python 3.11, with pandas version 2.0.1. All this was performed on Jupyter Notebook with VScode
| How do I prevent 'NotImplementedError' and 'TypeError' when using numeric aggregate functions in Pandas pivot tables with string columns? | This has been deprecated in Pandas 2.0. This is the warning pandas 1.5.3 gives:
FutureWarning: pivot_table dropped a column because it failed to
aggregate. This behavior is deprecated and will raise in a future
version of pandas. Select only the columns that can be aggregated.
You now have to select the specific columns you want to aggregate.
cols = ['Licenses', 'Sale Price']
pd.pivot_table(matrix_data, values=cols, index="Company", aggfunc="mean")
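As a quick sanity check, here is a minimal, runnable sketch of that fix (the DataFrame below abbreviates the question's sample table, keeping only one of the text columns):

```python
import pandas as pd

matrix_data = pd.DataFrame({
    "Company": ["Google", "Google", "Google", "BOBO", "IKEA"],
    "Contact": ["Larry Pager"] * 4 + ["Elon Tusk"],  # non-numeric column
    "Licenses": [150, 150, 300, 150, 300],
    "Sale Price": [2100000, 700000, 350000, 2450000, 4550000],
})

# restricting `values` to the numeric columns avoids the aggregation error
cols = ["Licenses", "Sale Price"]
out = pd.pivot_table(matrix_data, values=cols, index="Company", aggfunc="mean")
print(out)
```

The string column is simply left out of the result instead of raising.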
|
76387117 | 76387222 | Can I assume the following code will always pass my assertions? I'm a worried about the index value. I'm not sure if the scoped value will be passed along to the Task.Run lambda expression. I think it will be scoped just like the attributesChunked value seems to be. But I'd like some confirmation.
var tasks = new List<Task>();
var attributes = new string[4]{ "att1", "att2", "att3", "att4" };
var chunkIndex = -1;
foreach (var attributesChunked in attributes.Chunk(2))
{
var index = Interlocked.Increment(ref chunkIndex);
tasks.Add(Task.Run(() =>
{
if (index == 0)
Assert.Equal("attr1", attributesChunked.First());
if (index == 2)
Assert.Equal("attr3", attributesChunked.First());
        }));
    }
await Task.WhenAll(tasks);
| Can I be guaranteed that a variable value will be passed along to a new running Task? | For each task you create, you are introducing a closure on the index variable declared in that iteration. So yes your Task will use the right value, as can be tested using following code:
static async Task Main(string[] args)
{
var tasks = new List<Task>();
var attributes = new string[4] { "att1", "att2", "att3", "att4" };
var chunkIndex = -1;
foreach (var attributesChunked in attributes.Chunk(2))
{
var index = Interlocked.Increment(ref chunkIndex);
tasks.Add(Task.Run(async () =>
{
//simulate some work...
await Task.Delay(50);
Console.WriteLine($"index: {index}");
}));
}
await Task.WhenAll(tasks);
}
The result:
index: 0
index: 1
Note that the sequence of the output is entirely dependent on the scheduling of the individual Tasks and cannot be predicted...
Useful reference: What are 'closures' in .NET?
|
76384430 | 76386630 | I'm experiencing a strange (bug?) when importing the yaml-cpp static library with CMake.
main.cpp
#include "yaml-cpp/yaml.h"
CMakeLists.txt (working)
add_library(yaml-cpp ${PROJECT_BINARY_DIR}/path/to/libyaml-cpp.a)
target_link_libraries(main yaml-cpp)
CMakeLists.txt (not working)
add_library(yaml-cpp STATIC IMPORTED)
set_target_properties(yaml-cpp PROPERTIES IMPORTED_LOCATION ${PROJECT_BINARY_DIR}/path/to/libyaml-cpp.a)
target_link_libraries(main yaml-cpp)
When I use the second CMakeLists.txt, my main.cpp cannot find yaml-cpp/yaml.h. When I use the first CMakeLists.txt, it can, however, I get the "ar: no archive members specified" message every time I configure the project, which is annoying. Would like to import it the second way to get rid of that message.
| Strange behavior from CMake when importing a STATIC library | For both of you who answered, I appreciate it. Turns out I should have provided more information in my question. The issue was arising basically from the fact that I am attempting to create a portable installation, with the entire source of each of the dependencies within the project folder-structure, which is something that I haven't attempted before. It seemed logical to me that the library files (.a, .dylib, etc..) would contain all of the headers within them, but apparently that is not the case. I will provide a few details on how I was able to fix the issue.
Building the libraries before the rest of the project was the right move, but I forgot to install them. cmake -> make -> make install
BuildLibraries.txt (cmake file)
set(yaml-cpp_cmakelists "${CMAKE_SOURCE_DIR}/external/yaml-cpp-master")
set(yaml-cpp_build_location "${CMAKE_BINARY_DIR}/external/yaml-cpp-master")
file(MAKE_DIRECTORY ${yaml-cpp_build_location})
execute_process(
COMMAND ${CMAKE_COMMAND} -S ${yaml-cpp_cmakelists} -B ${yaml-cpp_build_location} -D CMAKE_INSTALL_PREFIX=${CMAKE_LIBRARY_OUTPUT_DIRECTORY} -D BUILD_SHARED_LIBS=OFF
WORKING_DIRECTORY ${yaml-cpp_build_location}
RESULT_VARIABLE result
)
if(NOT result EQUAL 0)
message(FATAL_ERROR "Failed to configure yaml-cpp")
endif()
execute_process(
COMMAND make -C ${yaml-cpp_build_location} -j4
WORKING_DIRECTORY ${yaml-cpp_build_location}
RESULT_VARIABLE result
)
if(NOT result EQUAL 0)
message(FATAL_ERROR "Failed to generate yaml-cpp")
endif()
execute_process(
COMMAND make install -C ${yaml-cpp_build_location} -j4
WORKING_DIRECTORY ${yaml-cpp_build_location}
RESULT_VARIABLE result
)
if(NOT result EQUAL 0)
message(FATAL_ERROR "Failed to install yaml-cpp")
endif()
Inside project-root directory CMakeLists.txt:
ensure that find_package() knows where to look for the libraries that you installed using set(CMAKE_PREFIX_PATH ...)
go ahead and use find_package(). It is, as these users suggested, much easier.
CMakeLists.txt (in root project directory)
cmake_minimum_required(VERSION 3.26.0)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(REBUILD_LIBS ON) # choose wether to rebuild libraries during the generate phase
set(PROJECT_BINARY_DIR "${CMAKE_SOURCE_DIR}/build") # root build directory
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}") # static libraries
set(CMAKE_INSTALL_LIBDIR ${PROJECT_BINARY_DIR}/lib)
set(CMAKE_INSTALL_BINDIR ${PROJECT_BINARY_DIR})
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}") # shared libraries
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}") # executables
set(CMAKE_PREFIX_PATH # define search-paths for find_package()
"${PROJECT_BINARY_DIR}/lib/cmake"
)
if(REBUILD_LIBS)
include(${CMAKE_SOURCE_DIR}/CMakeFiles/BuildLibraries.txt) # Build external libraries using the BuildLibraries.txt CMakeLists.txt file
endif()
project(myProject)
add_executable(main main.cpp)
find_package (yaml-cpp)
target_link_libraries(main yaml-cpp)
|
76384877 | 76386661 | I'm fairly new to Rshiny and looking for some help to understand how to create a plot using slider values as input. The user selected slider values are displayed as a table, and used as inputs to calculate an equation. The resulting calculated values are stored in a table (if possible I'd like to be able to download the generated values as a csv file) and used to generate a simple plot. This is what I have so far:
ui <- fluidPage(titlePanel(p("title", style = "color:#3474A7"),
sidebarLayout(
sidebarPanel(
sliderInput("Max", "Max:",
min = 0, max = 1000,
value = 116.8, step=0.1),
sliderInput("Rate", "Rate:",
min = 0, max = 5,
value = 0.12, step=0.01),
sliderInput("Inflection", "Inflection:",
min = 0, max = 20,
value = 11.06, step=0.01),
sliderInput("Area", "Area:",
min = 0, max = 10000,
value = 180, step=20),
p("Made with", a("Shiny",
href = "http://shiny.rstudio.com"), "."),
),
mainPanel(
# Output: Table summarizing the values entered ----
tableOutput("values"),
plotOutput("plot")
)
)
)
#use the slider values to estimate growth for any given year using the equation
Growth = function(x, A, B, C, R)
{R *(A *exp(-B * C^x))}
#create a Table Ouptut with selected Slider values
server <- function(input, output){
# Reactive expression to create data frame of all input values ----
sliderValues <- reactive({
data.frame(
Name = c("Max",
"Inflection",
"Rate",
"Area"),
Value = as.character(c(input$Max,
input$Inflection,
input$Rate,
input$Area)),
stringsAsFactors = FALSE)
})
# Show the values in an HTML table ----
output$values <- renderTable({
sliderValues()
})
#reactive expression to let users download the data as a table
##restC <- reactive({
#run the code for all time with the selected parameters from the slider
mylist<-list(c(1:50))
mydata<-data.frame(lapply(mylist, Growth, A=values$Max, B=values$Rate, C=values$Inflection, R=values$Area),mylist)
names(mydata)[1]<-"Pop"
names(mydata)[2]<-"Time"
#output$my_table<-renderDataTable({
#restC()
# })
#plot the values in a graph
output$plot <- renderPlot({
ggplot(mydata, aes(Time,Pop)) + geom_line()
})
}
shinyApp(ui = ui, server = server)
| Rshiny-Use slider values to cacluate a dataset and plot the calculated values | Made some tweakings in your code; now it does what you want:
library(shiny)
library(ggplot2)
ui <- fluidPage(titlePanel(p("title", style = "color:#3474A7")),
sidebarLayout(
sidebarPanel(
sliderInput("Max", "Max:",
min = 0, max = 1000,
value = 116.8, step=0.1),
sliderInput("Rate", "Rate:",
min = 0, max = 5,
value = 0.12, step=0.01),
sliderInput("Inflection", "Inflection:",
min = 0, max = 20,
value = 11.06, step=0.01),
sliderInput("Area", "Area:",
min = 0, max = 10000,
value = 180, step=20),
downloadButton("download", "Download data"),
p("Made with", a("Shiny",
href = "http://shiny.rstudio.com"), "."),
),
mainPanel(
# Output: Table summarizing the values entered ----
tableOutput("values"),
plotOutput("plot")
)
)
)
#use the slider values to estimate growth for any given year using the equation
Growth = function(x, A, B, C, R)
{R *(A *exp(-B * C^x))}
#create a Table Ouptut with selected Slider values
server <- function(input, output){
# Reactive expression to create data frame of all input values ----
sliderValues <- reactive({
data.frame(
Name = c("Max",
"Inflection",
"Rate",
"Area"),
Value = as.character(c(input$Max,
input$Inflection,
input$Rate,
input$Area)),
stringsAsFactors = FALSE)
})
# Show the values in an HTML table ----
output$values <- renderTable({
sliderValues()
})
#reactive expression to let users download the data as a table
restC <- reactive({
#run the code for all time with the selected parameters from the slider
mylist<-list(c(1:50))
mydata<-data.frame(lapply(mylist, Growth, A=input$Max, B=input$Rate, C=input$Inflection, R=input$Area),mylist)
names(mydata)[1]<-"Pop"
names(mydata)[2]<-"Time"
mydata
})
#output$my_table<-renderDataTable({
#restC()
# })
#plot the values in a graph
output$plot <- renderPlot({
ggplot(restC(), aes(Time,Pop)) + geom_line()
})
output$download <- downloadHandler(
filename = function() {
paste("data-", Sys.Date(), ".csv", sep="")
},
content = function(file) {
write.csv(restC(), file)
}
)
}
shinyApp(ui = ui, server = server)
|
76382956 | 76384761 | I've been trying for couple days to integrate a poor Bootstrap theme template in a React app with no success.
So, I've created a new application in my folder. All good. Installed all the packages required by the theme and upgraded to the latest version. All good.
Now, let's customize the App.js component in React with some custom code:
function App() {
return (
<div className="App">
<section className="slice slice-lg delimiter-top delimiter-bottom">
<div className="container">
<div className="row mb-6 justify-content-center text-center">
<div className="col-lg-8 col-md-10">
<span className="badge badge-primary badge-pill">
What we do
</span>
<h3 className="mt-4">Leading digital agency for <span className="text-warning typed" id="type-example-1" data-type-this="business, modern, dedicated"></span> solutions</h3>
</div>
</div>
<div className="row row-grid">
<div className="col-md-4">
<div className="pb-4">
<div className="icon">
<img alt="Image placeholder" src="../../assets/img/svg/icons/Apps.svg" className="svg-inject img-fluid" />
</div>
</div>
<h5>Designed for developers</h5>
<p className="text-muted mb-0">Quick contains components and pages that are built to be customized and used in any combination.</p>
</div>
<div className="col-md-4">
<div className="pb-4">
<div className="icon">
<img alt="Image placeholder" src="../../assets/img/svg/icons/Ballance.svg" className="svg-inject img-fluid" />
</div>
</div>
<h5>Responsive and scalable</h5>
<p className="text-muted mb-0">Scalable and easy to maintain, Quick enables consistency while developing new features and pages.</p>
</div>
<div className="col-md-4">
<div className="pb-4">
<div className="icon">
<img alt="Image placeholder" src="../../assets/img/svg/icons/Book.svg" className="svg-inject img-fluid" />
</div>
</div>
<h5 className="">Very well documented</h5>
<p className="text-muted mb-0">No matter you are a developer or new to web design, you will find our theme very easy to customize with an intuitive code.</p>
</div>
</div>
</div>
</section>
</div>
);
}
export default App;
Now, let's import everything we need from the custom theme in index.js:
...
import './assets/css/quick-website.css';
import './assets/js/quick-website.js';
import './assets/libs/@fortawesome/fontawesome-free/css/all.min.css';
import './assets/libs/jquery/dist/jquery.min.js';
...
However, when I import the core JS file (quick-website.js) of the theme, I get these types of errors:
From quick-website.js
From jquery.min.js
What am I missing here?
| React - integrate custom Bootstrap theme | Rather than importing jquery.min.js from assets, you should use npm to install the jquery package and then import relevant modules in the files where you need them. This is a more "react" way of doing things and it's much easier to update dependencies from the command line.
Run npm install jquery from the command line
Now, just import modules where you need them (in this case, quick-website.js):
import $ from 'jquery'
In general, if you see the error 'Something' is not defined, checking if you have imported the module is a good place to start.
|
76383351 | 76384817 | After updating Google.Apis.Oauth2.v2 NuGet package to v. 1.60.0.1869, I start getting exception Access to the path C:\Users is denied when trying login with Google in my UWP app. Here's my code:
string fname = @"Assets\User\Auth\google_client_secrets.json";
StorageFolder InstallationFolder = Windows.ApplicationModel.Package.Current.InstalledLocation;
var stream = await InstallationFolder.OpenStreamForReadAsync(fname);
credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
stream,
new[] { "profile", "email" },
"me",
CancellationToken.None);
The exception occurs in GoogleWebAuthorizationBroker.AuthorizeAsync call.
This code (with some light changes) worked well before with Google.Apis.Oauth2.v2 package v. 1.25.0.859, but now this package is obsolete and doesn't work anymore.
How to login with Google in my UWP app?
NOTE: I understand that a UWP app doesn't have access to C:\Users, but my code never requests anything in that folder. google_client_secrets.json exists and I can read it in the app from the stream, so this file is unrelated to the issue.
UPDATE
After I set the 5th parameter of AuthorizeAsync like this:
credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
stream,
new[] { "profile", "email" },
"me",
                 CancellationToken.None,
                 new FileDataStore(ApplicationData.Current.LocalCacheFolder.Path, true));
the exception is gone. Now the execution thread just dies inside the AuthorizeAsync and I start getting the following error popup:
| Google.Apis.Oauth2.v2 + UWP: Access to path c:\users is denied | After some tries, I failed to make the Google.Apis.Oauth2.v2 NuGet package v. 1.60.0.1869 work with UWP. I made Google login work by removing the NuGet package and implementing the OAuth flow myself, as described in my answer here.
|
76387092 | 76387227 | WaitGroups, Buffered Channels, and Deadlocks
I have this bit of code which results in a deadlock and I'm not certain why. I have tried using mutex locking in a few different places, closing channels in and outside of separate go routines, but the result is still the same.
I'm trying to send data through one channel (inputChan), and then read it from another (outputChan)
package main
import (
"fmt"
"sync"
)
func listStuff(wg *sync.WaitGroup, workerID int, inputChan chan int, outputChan chan int) {
defer wg.Done()
for i := range inputChan {
fmt.Println("sending ", i)
outputChan <- i
}
}
func List(workers int) ([]int, error) {
_output := make([]int, 0)
inputChan := make(chan int, 1000)
outputChan := make(chan int, 1000)
var wg sync.WaitGroup
wg.Add(workers)
fmt.Printf("+++ Spinning up %v workers\n", workers)
for i := 0; i < workers; i++ {
go listStuff(&wg, i, inputChan, outputChan)
}
for i := 0; i < 3000; i++ {
inputChan <- i
}
done := make(chan struct{})
go func() {
close(done)
close(inputChan)
close(outputChan)
wg.Wait()
}()
for o := range outputChan {
fmt.Println("reading from channel...")
_output = append(_output, o)
}
<-done
fmt.Printf("+++ output len: %v\n", len(_output))
return _output, nil
}
func main() {
List(5)
}
| What is the cause of the deadlock in my Go code using WaitGroups and Buffered Channels? | The code in your main function is sequential: it first tries to write 3k values into inputChan, and only then reads values from outputChan.
Your code blocks on the first of those steps:
nothing drains from outputChan before 3k values are successfully sent to inputChan, so the workers end up stuck on outputChan <- i after the first 1k values
once the workers stop draining from inputChan, main will get stuck on inputChan <- i after ~2k values
One way to fix this can be to have the producer (inputChan <- i) and the end consumer (for o := range outputChan {) run in separate goroutines.
You can keep one of these actors in the main goroutine, and spin up a new one for the other. For example:
go func(inputChan chan<- int){
for i := 0; i < 3000; i++ {
inputChan <- i
}
close(inputChan)
}(inputChan)
done := make(chan struct{})
go func() {
close(done)
// close(inputChan) // I chose to close inputChan above, don't close it twice
close(outputChan)
wg.Wait()
}()
...
https://go.dev/play/p/doBgfkAbyaO
one extra note: the order of actions around signaling done is important; channels done and outputChan should only be closed after wg.Wait() returns, which indicates that all workers are finished
// it is best to close inputChan next to the code that controls
// when its input is complete.
close(inputChan)
// If you had several producers writing to the same channel, you
// would probably have to add a separate waitgroup to handle closing,
// much like you did for your workers
go func() {
wg.Wait()
// the two following actions must happen *after* workers have
// completed
close(done)
close(outputChan)
}()
|
76387085 | 76387252 | We have used mat-select in our project; the panel shows overlapping the trigger text, as in version 14 and before (material version 12 was used in our project).
After trying a few hacks, the panel showed below the trigger text, but it was not consistent across different screen sizes, especially the mobile view.
We found that from material version 15, the select panel shows below the trigger text -
So we upgraded to material version 15, and also upgraded the Angular version to 15. But even after the upgrade, the mat-select still shows in the previous style.
Can someone please suggest what could be going wrong here and what needs to be done to get it working like the material version 15 mat-select?
| mat-select panel still shows overlapped like version 14 and below even after upgrade to angular material 15 | Based on the comments, you have only updated to Angular Material 15, but it is running in legacy mode.
To fully migrate you need to run the migration schematic:
ng generate @angular/material:mdc-migration
However, due to class name changes (the mdc- prefix, etc.) and structural changes in some of the components, you should follow their migration guide:
https://rc.material.angular.io/guide/mdc-migration
|
76383522 | 76384890 | That's the result I receive after trying to run the ASP.NET Core 7.0 runtime image in an Amazon ECS container (AWS Fargate service).
My project is an ASP.NET Core 7.0 Web API.
Here is the Dockerfile for the image, which is built on Jenkins and sent to Amazon ECS:
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["src/MetaGame/MyMetaGame.API/MyMetaGame.API.csproj", "src/MetaGame/MyMetaGame.API/"]
RUN dotnet restore "src/MetaGame/MyMetaGame.API/MyMetaGame.API.csproj"
COPY . .
WORKDIR "/src/src/MetaGame/MyMetaGame.API"
RUN dotnet build "MyMetaGame.API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyMetaGame.API.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyMetaGame.API.dll"]
After some googling I found a suggestion to increase the GC heap limit via an environment variable. I set it to 512 MB:
DOTNET_GCHeapHardLimit=20000000
But it hasn't fixed my problem.
No idea what the problem is.
Here is the configuration of csproj file
<PropertyGroup>
<TargetFramework>net7.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<DockerDefaultTargetOS>Linux</DockerDefaultTargetOS>
</PropertyGroup>
| ASP.NET Core 7.0 container: Failed to create CoreCLR, HRESULT: 0x8007000E | So, the problem is fixed by adding
ENV COMPlus_EnableDiagnostics=0
in the final stage of Dockerfile.
So, it should look like
FROM base AS final
ENV COMPlus_EnableDiagnostics=0
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyMetaGame.API.dll"]
Found the solution here
|
76384192 | 76386662 | I have two virtual machines. The first has an Apache server, with WordPress on a virtual host. The second has a MySQL server for the WordPress site on the first VM. The MySQL configuration uses the IP address of the first VM. But when I leave my home (and as a result leave my home Wi-Fi), the IP address of the first VM changes when I connect to the Internet from my phone, and my website crashes.
| How to make sure that my virtual machine does not change its IP address when connecting to different networks? | By default, IP addresses are assigned dynamically (via DHCP), so the IP changes when you are in a different network (that is, when you access the Internet through mobile data instead of home Wi-Fi). You may assign a static IP address to your VM; this is done from inside the VM.
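The exact steps depend on the guest OS; as one illustration (the interface name and all addresses below are assumptions, not values from the question), on a CentOS/RHEL-style guest you could pin the address in the interface configuration file and then restart networking:

```text
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
BOOTPROTO=none        # disable DHCP
ONBOOT=yes
IPADDR=192.168.56.10
NETMASK=255.255.255.0
GATEWAY=192.168.56.1
```

With a fixed address, the MySQL configuration pointing at the first VM keeps working regardless of which network the host machine is connected to.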
|
76384852 | 76386672 | How do you change the size of the text in the TextComponent in Flame? I want to make the text larger
I used the Flame documentation to get this, but I don't know how to modify it for a larger text size (like 20pt vs 14pt). Also, what is the anchor?
final style = TextStyle(color: BasicPalette.darkBlue.color);
final regular = TextPaint(style: style);
TextComponent startText = TextComponent(text: 'test text', textRenderer: regular)
..anchor = Anchor.topCenter
..x = (width * 0.2)
..y = (height - (height*0.5))
..priority = 300;
| How can I change the size of the text in TextComponent using Flutter Flame? | final style = TextStyle(
color: BasicPalette.darkBlue.color,
fontSize: 20.0, // Change the font size here
);
final regular = TextPaint(style: style);
TextComponent startText = TextComponent(
text: 'test text',
textRenderer: regular,
)
..anchor = Anchor.topCenter
..x = (width * 0.2)
..y = (height - (height * 0.5))
..priority = 300;
What's anchor?
The anchor in Flame's TextComponent determines where the text is positioned relative to its given coordinates (x and y). You can choose from options like top-left, top-center, top-right, center-left, center, center-right, bottom-left, bottom-center, and bottom-right to align the text as you want. Adjust the anchor and position values to position the text as needed.
|
76384267 | 76386693 | I'm using the apicache NPM package for my MERN application. I want one of my requests to be cached, and then when a request is made to another route the cache should be refreshed (it fetches the new data). I've managed to get one of my requests to be cached, but I can't manage to clear it.
This is my code
const apicache = require("apicache")
let cache = apicache.middleware
app.get("/api/users/checkJWT",cache("2 minutes"), async (req,res) => {
const token = req.headers.authorization?.split(" ")[1]
const decoded = jwt.verify(token,SECRET_KEY)
if(decoded != null && await User.findById(decoded.id) != null) {
return res.status(200).json({valid: true, user: await User.findById(decoded.id)})
}
else {
return res.status(400).json({valid: false})
}
})
app.patch("/api/users/user",authenticateJWTUser, async (req, res) => {
apicache.clear('/api/users/checkJWT')
const user = await User.findOne({ name: req.body.username });
if (req.body.name != null) {
user.name = req.body.name;
}
if (req.body.email != null) {
user.email = req.body.email;
}
if (req.body.password != null) {
user.password = await bcrypt.hash(req.body.password, 10);
}
if (req.body.tasks != null) {
user.tasks = req.body.tasks;
}
try {
const updatedUser = await user.save();
res.json(updatedUser);
} catch (error) {
res.status(400).json({ message: error.message });
}
});
Link to my Git repo: https://github.com/JesseOgunlaja/Task-Tracker-MERN/tree/ae1b0027bfa40f90ab7a3d6208c05a9eb41a4478
| How to clear cache for a route with the 'apicache' package | I was able to get this to work using the apicache-plus package.
const apicache = require("apicache-plus");
router.get(
"/api/users/getName/1",
apicache.middleware("10 minutes"),
async (req, res, next) => {
req.apicacheGroup = "toClear";
const someData = { someName: "Amy" };
res.json(someData);
}
);
router.get("/api/users/user/1/2", async (req, res, next) => {
console.log("something");
apicache.clear("toClear");
const user = { name: "Jeff" };
res.json(user);
});
|
76387202 | 76387264 | I'm facing an issue with Razor Page navigation. I'm on the https://localhost:7154/Application/setupAccount page and I have to redirect to the https://localhost:7154/Loan/setupLoan page.
So I used RedirectToPage("setupLoan", new { id = 123 }), but the issue is that it looks for the setupLoan page inside the Application folder. ( URL - https://localhost:7154/Application/setupLoan )
Then I get a "page not found" error.
I can use the Redirect functionality, but then I can't send dynamic query parameters.
So let me know if anyone knows the correct way to navigate a Razor Page to a page in another folder.
| Razor page redirect to another folder page | Try:
return RedirectToPage("setupLoan", new { area = "Application",id=1 });
result:
You can read Areas with Razor Pages to know more.
Update
return RedirectToPage("setupLoan", new { area = "Loan",id=1 });
Update 2
Loan is a page folder, not an area folder,
so try:
return RedirectToPage("/Loan/setupLoan", new { id = 1 });
|
76383257 | 76384992 | I'm developing a simple message server app, using WebSockets and NestJS gateways. The server receives an event to identify a listener by name, and then sends WebSocket updates to that listener.
Right now I'm using subject/observable approach, server after listen-as event returns a new observable, which filters all upcoming messages:
export class ListenAsDto {
name: string;
}
export interface PersonalisedMessage {
from: string;
title: string;
content: string;
}
export class MessagesService {
// ...
listenAs(listener: ListenAsDto): Observable<WsResponse<PersonalisedMessage>> {
return this.messageObservable.pipe(
filter((item) => item.to === listener.name),
map((item) => {
const copy = { ...item };
delete copy.to;
return { event: INBOX_MESSAGE_NAME, data: copy };
}),
);
}
}
But this only works the first time; after another request, the client receives updates for both users at the same time.
Is there a way to stop the previous observable for the current user and start a new one?
EDIT: I'm using ws platform
| Is there a way to stop old observable and set new after event in Nest Gateways? | Found a solution by using the native WebSocket instance: the same object is passed if the request was made by the same client. So, using a WeakMap, we can manually track all subscription objects and unsubscribe them when we get the same WebSocket client again:
export class MessagesService {
private subs = new WeakMap<WebSocket, Subscription>();
listenAs(ip: WebSocket, dto: ListenAsDto) {
if (this.subs.has(ip)) this.subs.get(ip).unsubscribe();
    // 'createObservable' is basically the question's 'listenAs' function
const obs = this.createObservable(dto);
this.subs.set(
ip,
obs.subscribe({
next(value) {
ip.send(JSON.stringify(value));
},
}),
);
return dto;
}
}
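Stripped of the NestJS and rxjs specifics, the core of this pattern can be sketched in plain TypeScript; FakeSocket and the inline Subscription object below are stand-ins I made up for illustration, not the real ws/rxjs types:

```typescript
// Self-contained sketch: one live subscription per client object, tracked in
// a WeakMap; re-subscribing the same client replaces the old subscription.
interface Subscription { unsubscribe(): void; }

class FakeSocket {}

const subs = new WeakMap<FakeSocket, Subscription>();
let active = 0; // how many subscriptions are currently live

function subscribe(client: FakeSocket): void {
  const prev = subs.get(client);
  if (prev) prev.unsubscribe(); // same client object => stop the old stream
  active += 1;
  subs.set(client, { unsubscribe() { active -= 1; } });
}

const client = new FakeSocket();
subscribe(client);
subscribe(client); // re-subscribing replaces, rather than stacks
console.log(active); // 1
```

Using a WeakMap (instead of a Map) also means the entry can be garbage-collected once the socket object itself is gone.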
|
76385199 | 76386831 | OK, I have set up Jupyter Notebook on a GCP VM; I want it to start automatically, with a predefined token, every time the VM is started. (The OS is CentOS.)
Does anyone have any idea how to solve this final issue so that we can get the Jupyter notebook started under the user test1 every time the VM starts?
Here is what I have done so far
Created a local test1 user and created a virtual env for it, and installed Jupyter Notebook inside the virtual env for isolation of Python
python3 -m venv tvenv
source tvenv/bin/activate
pip install jupyter
Understood that I would need to generate the config and update the token / IP / port etc. inside the config, because a dynamically generated token cannot be used
jupyter notebook --generate-config
modified the NotebookApp.token / NotebookApp.port / NotebookApp.ip etc. attributes to get that to work.
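For reference, the relevant lines in the generated ~/.jupyter/jupyter_notebook_config.py might look like this (the token, IP and port values below are illustrative placeholders, not the ones actually used):

```text
# ~/.jupyter/jupyter_notebook_config.py (illustrative values)
c.NotebookApp.ip = '0.0.0.0'           # listen on all interfaces
c.NotebookApp.port = 8888
c.NotebookApp.token = 'my-fixed-token' # predefined token instead of a generated one
c.NotebookApp.open_browser = False
```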
Based on an existing question, I created a shell script to be added to the startup of the GCP VM
sudo -u test1 bash -c 'cd ~/; nohup tvenv/bin/jupyter-notebook &'
When I run the command manually as root it works, but when I put it in the GCP startup script it constantly fails, saying
Account or password is expired, reset your password and try again
Note: helpful debugging tip. You can get the startup script logs by running the command below once inside the VM
sudo journalctl -u google-startup-scripts.service
| How to start Jupyter Notebook on a GCP VM with predefined token automatically on VM start? | As John has already pointed out to you in the comments, sudo is not a good option; since startup scripts run as root, su can be used to switch to another profile.
A few other things you should be aware of: as you are using a virtual environment, you have to set up the same environment variables that 'activate' does.
One possible solution that you can try is below:
su -c 'PATH="/home/test1/tvenv/bin:$PATH";export PATH; cd ~/; nohup tvenv/bin/jupyter-notebook &' -s /bin/sh test1
What it does is set the PATH to the virtual environment (an activity normally done by activate, among other tasks), move to the home directory of the user, and run the notebook in the background under the test1 user profile.
I would also suggest that you use a shutdown script in the GCP VM to shut down the notebook gracefully; make sure that you replace the port with the port that you are using in your config:
jupyter notebook stop 8888
Similar to start up you can check the shutdown logs by using the command below
sudo journalctl -u google-shutdown-scripts.service
Let me know if you still face any issues.
|
76383937 | 76388079 | When I'm creating a new project in PyCharm, I can see some environments I created for old projects that have been deleted:
but I can't find a way to delete these entries. Where are they? How do I delete them?
I tried going to "Python Interpreters" following https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html#removing_interpreter but they are just not there:
| How do I remove deleted Python environments in PyCharm? | The Interpreters list from the New Project dialogue has a different entry point from the project specific list. Go to File > New Projects Setup > Settings for New Projects (instead of going to File > Settings as usual):
Going through the other entry point, you can already notice the Python Interpreter item in the sidebar is slightly different visually from if you had gone the other route. From there on, the remaining dialogs are the same as usual, but the lists are different. Continue to Python Interpreter > Show All... at the bottom of the drop-down list:
|
76387278 | 76387294 | In SwiftUI,
Text("a \n b \n c \n d \n e")
.lineLimit(3)
In SwiftUI, the above code shows output including 3 dots at the end.
Output:
a
b
c...
But my target is to show the output without dots like this -
Target:
a
b
c
| How to remove 3 dots after end of line limit | Implement it the following way.
Text("1\n 2 \n 3 \n 4 \n 5".truncateToLineLimit(3))
extension String {
func truncateToLineLimit(_ lineLimit: Int) -> String {
var truncatedString = ""
let lines = self.components(separatedBy: "\n").prefix(lineLimit)
for line in lines {
truncatedString += line.trimmingCharacters(in: .whitespacesAndNewlines)
if line != lines.last {
truncatedString += "\n"
}
}
return truncatedString
}
}
|
76382914 | 76385022 | I have three frames; left_frame, middle_frame and right_frame. I would like to have no gap between left_frame and its scrollbar. How can I achieve it?
import tkinter as tk
from tkinter import ttk
def create_dummy_chart(frame, index):
# Create a dummy chart
label = tk.Label(frame, text=f"Chart {index}", width=10, height=2, relief="solid")
label.pack(side="top", pady=5)
# Create the main window
window = tk.Tk()
window.title("Chart Display")
window.geometry("800x300")
# Create the left frame
left_frame = tk.Frame(window, width=200, height=300, relief="solid", bd=1)
left_frame.pack(side="left", fill="y")
# Create a canvas to hold the left_frame and the scrollbar
canvas = tk.Canvas(left_frame, width=200, height=300)
canvas.pack(side="left", fill="y")
# Create the scrollbar
scrollbar = ttk.Scrollbar(left_frame, orient="vertical", command=canvas.yview)
scrollbar.pack(side="right", fill="y")
# Configure the canvas to use the scrollbar
canvas.configure(yscrollcommand=scrollbar.set)
canvas.bind("<Configure>", lambda e: canvas.configure(scrollregion=canvas.bbox("all")))
# Create a frame inside the canvas to hold the charts
chart_frame = tk.Frame(canvas)
# Add the frame to the canvas
canvas.create_window((0, 0), window=chart_frame, anchor="nw")
# Create dummy charts in the chart_frame
num_charts = 200
for i in range(num_charts):
create_dummy_chart(chart_frame, i)
# Update the canvas scrollable area
chart_frame.update_idletasks()
canvas.configure(scrollregion=canvas.bbox("all"))
# Create the middle frame
middle_frame = tk.Frame(window, width=200, height=300, relief="solid", bd=1)
middle_frame.pack(side="left", fill="y")
# Create the right frame
right_frame = tk.Frame(window, width=400, height=300, relief="solid", bd=1)
right_frame.pack(side="left", fill="both", expand=True)
# Run the Tkinter event loop
window.mainloop()
I have the left_frame and a vertical scrollbar. But I ended up having a gap between them. I expect to have no gap between them. How can I achieve this?
What I have:
What I expect:
| How to remove gap between a frame and its scrollbar? | If you're referring to the one or two pixel space between the canvas and the scrollbar, you can set the highlightthickness attribute of the canvas to zero, or you can set the highlightcolor to the same color as the background of the canvas so that the highlight ring is not visible when the canvas has the keyboard focus.
You might also need to explicitly set the padx value to zero when packing the canvas and scrollbar. On some platforms it might default to 1.
If you're referring to the fact that the canvas is wider than the items in the canvas, then you can set up a binding to set the width of the canvas to always match the width of the items on the canvas.
You can do that like this:
chart_frame.bind("<Configure>", lambda event: canvas.configure(width=chart_frame.winfo_width()))
|
76385290 | 76388158 | I looked inside the built-in WebApplicationBuilder class and noticed the
public ILoggingBuilder Logging { get; }
property. Here is the entire ILoggingBuilder interface:
public interface ILoggingBuilder
{
IServiceCollection Services { get; }
}
It just stores a single property so what is the point of this interface? Couldn't the WebApplicationBuilder just store an instance of IServiceCollection directly?
| What is the point of the ILoggingBuilder interface in ASP.NET Core? | It's about scoping and reducing intellisense hell. If everything was on IServiceCollection it would get cluttered quickly. Subsystems that are highly configurable and are themselves extensible need a "target" to extend. This is one of the patterns the platform employs to solve this problem.
Some of the patterns you'll see:
Simple options via a callback:
services.AddComponent((ComponentOptions options) =>
{
options.SomeFlag = true;
});
This pattern works well when the number of options on component is small, self-contained and has limited extensibility.
Complex builder returned from the API:
IComponentBuilder builder = services.AddComponent();
builder.AddExtensibleThing1();
builder.AddExtensibleThing2();
Breaks the fluent API pattern but reduces nested lambdas.
OR
services.AddComponent((IComponentBuilder builder) =>
{
builder.AddExtensibleThing1();
builder.AddExtensibleThing2();
});
Similar to the above but nested instead of a return value. Keeps the fluent API working at the cost of more callbacks.
|
76387293 | 76387324 | I added i18next to an old React project. A lot of text cannot be translated yet. How can I search for all of it in VSCode?
Some cases:
<Button className="w-full" onClick={onClick}>
Save
<Button>
<Button
type="primary"
onClick={onCLick}
className="ml-2"
>
Save
</Button>
<Button>Save</Button>
<ContentWrapper title="Connection" headerAction={<Button size="small">Add</Button>}>
<div>{`Connection`}</div>
</ContentWrapper>
<p className="mt-2 text-center">
Drag and drop files here or <span className="text-blue-500">{t('browse')}</span>
</p>
<div>
{mediaRecorder?.state === 'recording' ? (
<Button onClick={handleStop}>{t('Stop')}</Button>
) : (
<Button onClick={handleStart}>Start</Button>
)}
</div>
Translated:
<Button>{t('Save')}</Button>
<ContentWrapper title="Connection" headerAction={<Button size="small">{t('Add')}</Button>}>
<div>{t('Connection')}</div>
</ContentWrapper>
| How to search for untranslated text in VSCode? | Find:
(<Button[\s\S\n]*?>[\n\s]*)(\w+)([\n\s]*</Button>)
Replace:
$1{t('$2')}$3
|
76384368 | 76388195 | Bokeh scatter plot disappears when checkbox is added to layout.
If I exclude layout = row([checkbox_group, p]) and do show(p), I get the intended scatter plot with the color bar.
But when I include layout = row([checkbox_group, p]) and do show(layout), the scatter plot disappears whereas the checkbox and color bar appear.
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show, curdoc
from bokeh.models import ColumnDataSource, ColorBar, HoverTool, CustomJS
from bokeh.models.widgets import CheckboxGroup
from bokeh.transform import linear_cmap
from bokeh.palettes import Iridescent18
from bokeh.models.mappers import LinearColorMapper
from bokeh.layouts import row
gene_list = ['A', 'B', 'C']
file_paths = [r"home/File1.feather", r"home/File2.feather"]
checkbox_group = CheckboxGroup(labels=['File 1', 'File 2'], active = [0])
def checkbox_change(attr, old, new):
if new:
selected_file_index = new[0]
if selected_file_index >= len(file_paths):
print("Selected file index is out of range.")
return
selected_file_path = file_paths[selected_file_index]
df = pd.read_feather(selected_file_path, columns=gene_list + ['umap1', 'umap2', 'index'])
df = df.replace({'index' : 'cell_type'})
median_score = []
for idx, r in df.iterrows():
score = np.sum(r[gene_list])
median_score.append(score)
df['score'] = median_score
source.data = df.to_dict(orient='list')
checkbox_group.on_change('active', checkbox_change)
# Create the initial plot
df = pd.read_feather(file_paths[0], columns=gene_list + ['umap1', 'umap2', 'index'])
median_score = []
for idx, r in df.iterrows():
score = np.sum(r[gene_list])
median_score.append(score)
df['score'] = median_score
source = ColumnDataSource(df)
mapper = linear_cmap(
field_name='score', palette=Iridescent18, low=df['score'].min(), high=df['score'].max()
)
p = figure(
title='UMAP Visualization',
x_axis_label='umap1',
y_axis_label='umap2',
sizing_mode='stretch_width',
height=1500,
toolbar_location='above'
)
hover = HoverTool(tooltips=[('Cell Name', '@index')], mode='mouse')
p.add_tools(hover)
p.scatter("umap1", "umap2", color=mapper, source=source)
color_mapper = LinearColorMapper(
palette=Iridescent18, low=df['score'].min(), high=df['score'].max()
)
color_bar = ColorBar(
color_mapper=color_mapper, label_standoff=12, location=(0, 0), title='Score'
)
p.add_layout(color_bar, 'right')
layout = row([checkbox_group, p])
show(layout) #show(p)
| Bokeh plot is missing in layout when checkbox group is used | Your problem comes from the selected sizing_modes in the figure and the row function calls.
You are setting the sizing_mode of the figure to stretch_width while using the default sizing_mode of the row layout, which is fixed. This leads to a behavior where the figure is shrunk to a minimal width of 0.
To fix this, you can
set the sizing_mode of the row to stretch_width or
set the figure to a fixed width
Minimal Example
from bokeh.plotting import figure, show
from bokeh.models import CheckboxGroup, ColorBar, ColumnDataSource, LinearColorMapper
from bokeh.transform import linear_cmap
from bokeh.palettes import Iridescent18
from bokeh.layouts import row

source = ColumnDataSource(dict(A=[1, 2, 3], B=[1, 2, 3], score=[2, 4, 6]))
p = figure(sizing_mode='stretch_width')
mapper = linear_cmap(
    field_name='score', palette=Iridescent18, low=2, high=6
)
p.scatter("A", "B", color=mapper, source=source)
checkbox_group = CheckboxGroup(labels=['Checkbox 1', 'Checkbox 2'], active=[0])
color_mapper = LinearColorMapper(palette=Iridescent18, low=2, high=6)
color_bar = ColorBar(
    color_mapper=color_mapper, label_standoff=12, location=(0, 0), title='Score'
)
p.add_layout(color_bar, 'right')
layout = row([checkbox_group, p], sizing_mode='stretch_width')
show(layout)
Output
Comment
The output was created with bokeh 3.1.1. I am not sure if this will work for older versions.
|
76378492 | 76385025 | There is a C function that gets the acceleration values of x,y,z of an accelerometer sensor (MEMS IMU) as input, and calculates the rotation matrix in a way that the z axis is aligned with the gravity. It is being used for calibrating the accelerometer data.
#define X_AXIS (0u)
#define Y_AXIS (1u)
#define Z_AXIS (2u)
static float matrix[3][3];
void calculate_rotation_matrix(float raw_x, float raw_y, float raw_z)
{
const float r = sqrtf(raw_x * raw_x + raw_y * raw_y + raw_z * raw_z);
const float x = raw_x / r;
const float y = raw_y / r;
const float z = raw_z / r;
const float x2 = x * x;
const float y2 = y * y;
matrix[X_AXIS][X_AXIS] = (y2 - (x2 * z)) / (x2 + y2);
matrix[X_AXIS][Y_AXIS] = ((-x * y) - (x * y * z)) / (x2 + y2);
matrix[X_AXIS][Z_AXIS] = x;
matrix[Y_AXIS][X_AXIS] = ((-x * y) - (x * y * z)) / (x2 + y2);
matrix[Y_AXIS][Y_AXIS] = (x2 - (y2 * z)) / (x2 + y2);
matrix[Y_AXIS][Z_AXIS] = y;
matrix[Z_AXIS][X_AXIS] = -x;
matrix[Z_AXIS][Y_AXIS] = -y;
matrix[Z_AXIS][Z_AXIS] = -z;
}
float result[3];
void apply_rotation(float x, float y, float z)
{
result[X_AXIS] = matrix[X_AXIS][X_AXIS] * x
               + matrix[X_AXIS][Y_AXIS] * y
               + matrix[X_AXIS][Z_AXIS] * z;
result[Y_AXIS] = matrix[Y_AXIS][X_AXIS] * x
               + matrix[Y_AXIS][Y_AXIS] * y
               + matrix[Y_AXIS][Z_AXIS] * z;
result[Z_AXIS] = matrix[Z_AXIS][X_AXIS] * x
               + matrix[Z_AXIS][Y_AXIS] * y
               + matrix[Z_AXIS][Z_AXIS] * z;
}
I'm trying to wrap my head around how this works, and why there is no use of trigonometric functions here.
Is it just simplifying the trigonometric functions by normalizing the input values and using equivalent equations to compute them?
What are the limitations of this method? For example, when the denominator calculated is zero, we will have division by zero. Anything else?
Tried to search on the internet and stackoverflow, but couldn't find a similar method to calculate the rotation matrix.
UPDATE:
Just simplified the calculations so they are more readable.
To add more context, this code is used to rotate the readings of an accelerometer so that, regardless of the orientation of the device, the z-axis is perpendicular to the ground.
The calculate_rotation_matrix() is called when we know that the object is stationary and is on a flat surface. This results in calculating the 3x3 matrix. Then the apply_rotation() is used to rotate subsequent readings.
| How to calculate rotation matrix for an accelerometer using only basic algebraic operations | A quaternion representation can apply rotations without trig functions. But this appears to be a version of: https://math.stackexchange.com/a/476311 . The math appears to be a variation thereof, where "a" and "b" are the accelerometer and gravity vectors.
The method also appears to assume measurements will not be perfect. And MEMS sensors fit that description. Otherwise, as you stated, if x and y are both zero then you have a divide by zero condition.
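The absence of trig becomes clear if you check the matrix numerically: after normalization, x, y and z are themselves the direction cosines of the gravity vector, so no sin()/cos() calls are needed. Below is a quick Python re-implementation of the two C functions (a sketch for verification only, not part of the original firmware) that confirms the matrix is orthogonal and maps the stationary reading onto the negative z-axis:

```python
import math

X, Y, Z = 0, 1, 2

def rotation_matrix(raw):
    """Build the 3x3 matrix exactly as calculate_rotation_matrix() does."""
    r = math.sqrt(sum(c * c for c in raw))
    x, y, z = (c / r for c in raw)       # normalized gravity direction
    d = x * x + y * y                    # shared denominator; zero when the device lies flat
    m = [[0.0] * 3 for _ in range(3)]
    m[X][X] = (y * y - x * x * z) / d
    m[X][Y] = (-x * y - x * y * z) / d
    m[X][Z] = x
    m[Y][X] = (-x * y - x * y * z) / d
    m[Y][Y] = (x * x - y * y * z) / d
    m[Y][Z] = y
    m[Z][X] = -x
    m[Z][Y] = -y
    m[Z][Z] = -z
    return m

def apply_rotation(m, v):
    """Plain matrix-vector multiply, as in apply_rotation()."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Running this with any non-vertical reading shows apply_rotation(m, raw) returning approximately (0, 0, -r), where r is the magnitude of the raw reading, and the rows of m come out orthonormal, i.e. m is a proper rotation built purely from the normalized components.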
|
76384085 | 76388372 | What would be the correct return value of my setMyBirthday function in the HomeRepository class for the code to work? Void doesn't seem to be the right return value.
Fragment:
private fun HomeViewModel.setupObserver() {
myBirthday.observe(viewLifecycleOwner) { response ->
when(response) {
is Resource.Error -> {
response.message?.let { message ->
Log.e(TAG, "An error occurred: $message")
}
}
is Resource.Loading -> {
binding.buttonChangeBirthday.isClickable = false
}
is Resource.Success -> {
binding.buttonChangeBirthday.isClickable = true
}
}
}
}
HomeViewModel:
val myBirthday: MutableLiveData<Resource<Void>> = MutableLiveData()
fun setMyBirthday(birthday: String) = viewModelScope.launch {
try {
myBirthday.postValue(Resource.Loading())
val response = homeRepository.setMyBirthday(birthday)
myBirthday.postValue(Resource.Success(response))
} catch (e: Exception) {
myBirthday.postValue(Resource.Error(e.message!!))
}
}
HomeRepository:
suspend fun setMyBirthday(birthday: String) =
databaseReference
.child("users")
.child(myUserID)
.child("birthday")
.setValue(birthday)
.await()
| Kotlin Firebase Realtime Database correct return value for setValue() in MVVM |
What would be the correct return value of my function in the HomeRepository class for the code to work? Void doesn't seem to be the right return value.
Indeed, Void is the type of object that the setMyBirthday function returns. So your function should look like this:
// 👇
suspend fun setMyBirthday(birthday: String): Void =
databaseReference
.child("users")
.child(myUserID)
.child("birthday")
.setValue(birthday)
.await()
While your approach will work, it will be more convenient to return a Resource<Boolean> as in the following lines of code:
suspend fun setMyBirthday(birthday: String): Resource<Boolean> = try {
databaseReference
.child("users")
.child(myUserID)
.child("birthday")
.setValue(birthday)
.await()
Resource.Success(true)
} catch (e: Exception) {
Resource.Error(e)
}
This is more useful because you can pass the error further; otherwise, you won't know when something fails.
|
76387171 | 76387347 | Problem: I am trying to scrape the image source locations for pictures on a website, but I cannot get Beautiful Soup to scrape them successfully.
Details:
Here is the website
The three images I want have the following HTML tags:
<img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-1.jpg" style="display: none;">
<img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-2.jpg" style="display: none;">
<img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-3.jpg" style="display: none;">
Code I've Tried:
soup.find_all('img')
soup.select('#imageFlicker')
soup.select('#imageFlicker > div')
soup.select('#imageFlicker > div > img:nth-child(1)')
soup.find_all('div', {'class':'exercise-post__step-image-wrap'})
soup.find_all('div', attrs={'id': 'imageFlicker'})
soup.select_all('#imageFlicker > div > img:nth-child(1)')
The very first query of soup.find_all('img') gets every image on the page except the three images I want. I've tried looking at the children and sub children of each of the above, and none of that works either.
What am I missing here? I think there may be JavaScript that is changing the CSS display attribute from block to none and back, so the three images look like a GIF instead of three different images. Is that messing things up in a way I'm not understanding? Thank you!
| Beautiful Soup Img Src Scrape | The content is provided dynmaically via JavaScript, but not rendered by requests per se, unlike in the browser.
However, you can search for the JavaScript variable:
var data = {"images":["https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-1.jpg","https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-2.jpg","https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-3.jpg"],"interval":600};
with regex re.search() and convert its content string with json.loads() to JSON, so that you can access it easily.
Example
import requests
import re, json
url = 'https://www.acefitness.org/resources/everyone/exercise-library/14/bird-dog/'
json.loads(re.search(r'var data = (.*?);', requests.get(url).text).group(1))['images']
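To make the idea concrete without hitting the network, here is the same re.search() + json.loads() extraction applied to a minimal stand-in for the page source (the image list is abbreviated to two entries for the example):

```python
import re
import json

# Minimal stand-in for the page source; in the real case this string is
# requests.get(url).text.
html = ('<html><body><script>var data = {"images":'
        '["https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-1.jpg",'
        '"https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-2.jpg"],'
        '"interval":600};</script></body></html>')

# Non-greedy match stops at the first ';', i.e. right after the JSON object.
match = re.search(r'var data = (.*?);', html)
images = json.loads(match.group(1))['images']
print(images)
```

The captured group is a valid JSON object string, so json.loads() turns it straight into a dict whose 'images' key holds the three (here: two) source URLs.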
|
76383602 | 76385062 | I have an HTML image that I want to pan & zoom programatically. In order to improve the experience, I added a CSS transition for smoothness.
When I click on the image, I need to determine the mouse position within the image. Currently, event.offsetX gives me the mouse position within the image at the current frame of the animation.
I would like to know the mouse position as if the animation finished already (even though it hadn't).
Here is another way of explaining the problem. I created below an example where an image zooms in after we click on the button. The zoom takes 5 seconds. During the zoom, if I keep my mouse fixed and keep clicking, the offset value changes as the image moves on the screen and then it stabilizes after 5 seconds and returns the same offset value. I would like to know that final offset value before the animation finishes if possible.
<button onclick="onButton()">Zoom</button>
<br>
<img
id='img'
onclick="onImage(event)"
src="https://picsum.photos/id/237/200"
style="transition: 5s ease"
>
<script>
function onButton() {
const img = document.querySelector('#img')
img.style.scale = 5.0
}
function onImage(event) {
console.log(event.offsetX, event.offsetY)
}
</script>
| Javascript event.offset alternative for final value during CSS scale & transition | If you don't mind adding some additional HTML and CSS:
function onButton() {
const img = document.querySelector("#img");
img.style.scale = 5.0;
const target = document.querySelector("#target");
target.style.scale = 5.0;// make sure target's style matches img's style
}
function onImage(event) {
console.log(event.offsetX, event.offsetY);
}
#wrapper {
display: inline-block;
position:relative;
font-size:0;
}
#target{
position:absolute;
width:100%;
height:100%;
}
#img {
transition: 5s ease;
z-index:2;
pointer-events:none;
}
<button onclick="onButton()">Zoom</button>
<br>
<div id="wrapper">
<div id="target" onclick="onImage(event)"></div>
<img id="img" src="https://picsum.photos/id/237/200">
</div>
This works by zooming an invisible div behind the image without a transition, and transferring the click event to that div instead of the image itself.
|
76383390 | 76385275 | I have a VStack like this:
var body: some View{
VStack{
//some view
}
.background(
ZStack{
LinearGradient(gradient: Gradient(
colors: [
.yellow_500,
.yellow_500,
.yellow_500,
.yellow_50
]), startPoint: .top, endPoint: .bottom)
.ignoresSafeArea(.container, edges: [.top])
Image(R.image.home_bg_try.name)
.resizable()
.scaledToFill()
.frame(maxWidth: .infinity, maxHeight: .infinity)
.ignoresSafeArea(.container, edges: [.top])
}
// .offset(y: -20)
)
}
I want the image to be displayed without considering the safe area, which currently results in a white space appearing at the top, as shown in this image: (https://i.stack.imgur.com/knRdm.png).
I attempted to fix this issue by adding an offset, but I noticed that the height of the white space varies across different models.
How can I resolve this problem?
| How to ignore safe area for a background with a linear gradient and image in swiftUI? | Your current setup has the right idea, but if you want to completely ignore the safe area for the VStack's background, you should apply the .ignoresSafeArea() modifier on each of the backgrounds.
The white space you are seeing might be a result of the VStack not filling up the whole screen.
Try moving .ignoresSafeArea() modifier outside of the ZStack, and apply it to the whole VStack. Here is the updated code:
var body: some View {
VStack{
//some view
}
.background(
ZStack {
LinearGradient(gradient: Gradient(
colors: [
.yellow_500,
.yellow_500,
.yellow_500,
.yellow_50
]), startPoint: .top, endPoint: .bottom)
Image(R.image.home_bg_try.name)
.resizable()
.scaledToFill()
.frame(maxWidth: .infinity, maxHeight: .infinity)
}
)
.ignoresSafeArea()
}
By using .ignoresSafeArea(), you instruct SwiftUI to layout the VStack and its background across the whole screen, including under the status bar and home indicator on iPhone models with edge-to-edge screens.
Additionally, you might want to apply .edgesIgnoringSafeArea(.all) instead of .ignoresSafeArea() in certain situations. For example, if your view doesn't play well with safe area insets when it's embedded in another view hierarchy, you might find .edgesIgnoringSafeArea(.all) works better. However, in SwiftUI 2.0 and later, .ignoresSafeArea() is generally recommended.
|
76387297 | 76387372 | How to conditionally assign a value to "ingList" based on the value of recipeList?
class Calculator extends StatefulWidget {
List<IngredientList> recipeList;
Calculator(this.recipeList, {Key? key}) : super(key: key);
@override
State<Calculator> createState() => _CalculatorState();
}
class _CalculatorState extends State<Calculator> {
List<IngredientList> ingList = [];
| How to conditionally assign a value to Flutter List | You can use initState to do that
class Calculator extends StatefulWidget {
List<IngredientList> recipeList;
Calculator(this.recipeList, {Key? key}) : super(key: key);
@override
State<Calculator> createState() => _CalculatorState();
}
class _CalculatorState extends State<Calculator> {
List<IngredientList> ingList = [];
@override
void initState() {
super.initState();
ingList = widget.recipeList; // you can do any condition here
}
}
|
76384547 | 76388486 | I am learning Rust. I am working on an embedded Rust project to interface with an I2C LED driver.
I have defined a pub enum LedRegister that defines all of the I2C registers used to control each LED. These register definitions are always one byte, they are set by the chip's datasheet, and they will never change. However, I like having them in an enum, because it allows me to create a function that will only accept LedRegisters as inputs.
pub enum LedRegister {
// 0-255 PWM value. 255 = max LED brightness.
CpuStatusR = 0x03,
CpuStatusG = 0x02,
CpuStatusB = 0x01,
Ch1Cor = 0x04,
Ch1Ptt = 0x06,
Ch1SpareR = 0x05,
Ch1StatusR = 0x0C,
Ch1StatusG = 0x0B,
Ch1StatusB = 0x0A,
Ch2Cor = 0x07,
Ch2Ptt = 0x09,
Ch2SpareR = 0x08,
Ch2StatusR = 0x0E,
Ch2StatusG = 0x0D,
Ch2StatusB = 0x0F,
SpareLedR = 0x12,
SpareLedG = 0x11,
SpareLedB = 0x10,
}
In the same scope, I have a pub fn enable_led to toggle each LED on or off:
pub fn enable_led(i2c: &mut I2c, led: LedRegister, state: bool) -> Result<(), Box<dyn Error>> {
if state {
i2c.write(&[led as u8, 0xFF])?;
} else {
i2c.write(&[led as u8, 0x00])?;
}
i2c.write(&[ControlRegister::Update as u8, 0x00])?;
Ok(())
}
From main, I can call this function and see that my LED turns on:
led_driver::enable_led(&mut i2c, led_driver::LedRegister::SpareLedG, true);
I would like to write another function that allows me to blink the LED by calling enable_led multiple times with a delay in between:
pub fn blink_led(i2c: &mut I2c, led: LedRegister, duration_ms: u64) -> Result<(), Box<dyn Error>> {
enable_led(i2c, led, true);
// add a delay
enable_led(i2c, led, false);
// add a delay
Ok(())
}
Here, Rust complains because the value of 'led' has moved into the first call of enable_led. I understand that there can only be one owner of the value - so I get why Rust is complaining.
I have attempted to 'borrow' the value of LedRegister:
pub fn enable_led(i2c: &mut I2c, led: &LedRegister, state: bool) -> Result<(), Box<dyn Error>> {
if state {
i2c.write(&[led as u8, 0xFF])?;
} else {
i2c.write(&[led as u8, 0x00])?;
}
i2c.write(&[ControlRegister::Update as u8, 0x00])?;
Ok(())
}
led_driver::enable_led(&mut i2c, &led_driver::LedRegister::SpareLedG, true);
This gives me E0606 and a suggestion to implement casting through a raw pointer and unsafe blocks, which seems needlessly complicated for this use case.
I have also considered implementing Copy on my LedRegister, so I can make copies of the initial value before calling enable_led() with the same LED. This compiles, but it also seems more difficult than it should be. I don't think I actually need multiple copies of my register definition byte in RAM. Is this actually necessary?
I get the feeling that I'm not using the right features of Rust for this task. I am not strongly tied to using an enum here if that is the wrong tool for the job. However, I do want to limit the inputs to enable_led and blink_led so they only accept valid LEDs.
What is the best approach to accomplish my objective here in Rust?
| Passing a variable to a function within another function | Casting a reference to an integer is like casting a pointer to an integer, which gives you the address and not the value inside casted, except it is not allowed and you need to go through a raw pointer explicitly. This doesn't require unsafe (led as *const LedRegister as u8), but this also doesn't do what you want.
The correct solution is to implement Copy for your enum. You should always implement Copy for simple enums, unless there is a strong reason not to. Taking references to the value is not in any way better than copying it: it is slower, requiring dereferences and storing in the stack instead of potentially in registers, and it complicates the source code for no reason.
|
76383164 | 76385416 |
.App {
font-family: sans-serif;
text-align: center;
}
.section-01 {
position: relative;
z-index: 0;
background-color: red;
color: white;
padding: 2rem;
}
.section-02 {
position: fixed;
z-index: 1;
background-color: blue;
color: white;
padding: 1rem;
top: 25vh;
left: 30vh;
}
.div-01 {
z-index: 2;
position: absolute;
background-color: purple;
padding: 1rem;
top: 15vh;
left: 25vh;
}
<section class="section-01">
Section-01
<div class="div-01">Div-01</div>
</section>
<section class="section-02">
<div>Section - 02</div>
</section>
I have a component like this:
<section id='section-01' style={{zIndex: 0}}>
<div id='div-01' style={{zIndex: 2}}>Hello World</div>
</section>
<section id='section-02' style={{zIndex: 1}}>
</section>
Now my problem is that due to absolute positioning of #section-02 and #div-01, #section-02 takes the precedence and shows above #div-01. However I don't want that, I want #div-01 to show up above #section-02 and #section-02 show up above #section-01.
Please note that I don't have access to alter code of #section-01 or #section-02. I can only control the code of #div-01.
I know that if a parent element has a lower z-index, then its children won't be able to show above any other entity in the parent element's hierarchy, so no matter how high I set the value of div-01 it will always show below section-02. How can I solve this problem?
Adding code sandbox for example: https://codesandbox.io/s/ecstatic-flower-u5plui?file=/src/styles.css
Here's how it shows atm:
And here's how I want it to be:
Please note that you're only allowed to make changes in
.div-01 {
position: relative;
z-index: 2;
position: absolute;
background-color: purple;
padding: 1rem;
top: 6vh;
left: 25vh;
}
or in JSX part (HTML):
<div className="div-01">Div-01</div>
I don't have access to section-01 or section-02.
I cannot restructure the HTML as these comes from different App owners and I have only control to div and its inner content.
| Z Index complexity - How to position a component inside of a div to have a higher z-index value against a component outside to its level? | Objective
Given 2 sibling positioned elements (.section-01 and .section-02), of which the first element sits at a lower stacking context than the second element. Place the child positioned element of the first element (.section-01 > .div-01) above the second element (.section-02) in the z-axis.
Criteria
Only the child element of the first element (.section-01 > .div-01) may be modified by CSS (apparently by stylesheet?) and HTML (by JSX). The 2 sibling elements are generated by an app and for some reason are beyond reach (please read the XY problem).
Taking the aforementioned criteria into consideration, we cannot:
add, remove, or modify HTML (.div-01 is the exception)
add, remove, or modify CSS (.div-01 is the exception)
Problem
There are two stacking contexts:
.section-01 sits at z-index: 0 without any extended positioning (top, right, bottom, and left) nor does it reference any other positioned elements (position: relative).
.section-02 sits above everything else at z-index: 1 and its position references the viewport (top, left, and position: fixed).
Naturally the #2 stacking context will occupy the foreground. Even if .section-02 had z-index: 0 it would still be in the foreground because it follows .section-01 in the HTML layout. The only way to affect a stacking context is to change the element that started it (i.e. .section-01 and .section-02).
Solutions
If we could just resolve this issue without having to consider the impossible restrictions mentioned under Criteria, we could simply place .section-01 back into the normal flow of the document by assigning it position: static (or removing position altogether), thereby removing it as the origin of a stacking context and allowing its child element, .div-01, to be the origin of its own stacking context. (see Figure I)
Figure I
.section-01 {
position: static;
/* As it was originally, there was no real reason for it to have any `position`
property */;
}
.div-01 {
position: fixed /* or `absolute` */;
z-index: 2;
/* ... */
}
Putting aside my doubts about the criteria being a genuine concern (maybe it's too difficult to target the app's HTML/CSS because it assigns randomly determined #ids or .classes), there is a solution with the same result but in an indirect way.
Since the only element in our complete control is .div-01, we can target its parent by using the :has() pseudo-class. (see Figure II)
Figure II
/* This will target anything that is a direct ancestor (aka parent) of
.div-01 */
:has(> .div-01) {
position: static
}
.div-01 {
position: fixed /* or absolute */;
z-index: 2;
}
Example
html,
body {
margin: 0;
padding: 0;
min-height: 100vh;
font: 2ch/1.15 "Segoe UI";
}
.section-01 {
position: relative;
z-index: 0;
padding: 2rem;
color: white;
background-color: red;
}
.section-02 {
position: fixed;
top: 10vh;
left: 30vw;
z-index: 1;
width: 40vw;
height: 30vh;
padding: 6rem 1rem 1rem;
text-align: right;
color: white;
background-color: blue;
}
.div-01 {
position: fixed;
top: 5vh;
left: 25vw;
z-index: 2;
width: 30vw;
height: 30vh;
padding: 1rem;
color: white;
background-color: purple;
}
:has(> .div-01) {
position: static;
}
<section class="section-01">
Section-01
<div class="div-01">Div-01</div>
</section>
<section class="section-02">
<div>Section - 02</div>
</section>
|
76387354 | 76387413 | In PowerBI, I have a managed parameter which is a list of text.
abc, def, ghi.
And that parameter is being used to call a custom function in Power BI, i.e. MyCustomFunction(@Name).
My question is: how can I change the value of the parameter from the URL when I load the report (after I published it to the Power BI service)?
i.e. I would like:
https://mypowerbi url?filter Name eq 'abc'
`MyCustomFunction('abc')` will be called.
https://mypowerbi url?filter Name eq 'def'
`MyCustomFunction('def')` will be called.
| How can I use url query parameter to set the value of a 'Parameters' in PowerBI? |
My question is how can I change the value of the parameter from the url when I load the report (after i published it to PowerBI service)?
You can't. In Power BI reports, the URL query parameters can be used to filter the report. See Filter a report using query string parameters in the URL
Parameters can be set from the URL in paginated reports only (legacy from SSRS). See Pass a report parameter within a URL for a Power BI paginated report.
For Power BI reports published to the Power BI Service, the parameter values can be changed from the dataset's settings. See Edit parameter settings in the Power BI service. Keep in mind that this affects all users who see the report; it is not something that affects your viewing session only. Changing parameter values of imported datasets usually requires a dataset refresh for the changes to take effect.
Parameters of published reports can also be updated using the Power BI's REST API - see Update Parameters and Update Parameters In Group.
|
I am trying to make a simple navbar. I managed to make the base, but when I try to personalize it I get stuck.
My objective is that when the mouse goes over each element (Home, etc.) it highlights like a box, but currently only the <a> I tried is highlighted, not the <li> holding it.
I'm trying to make the <li> an anchor that can be clicked and highlighted, similar to Stack Overflow's navbar.
.nav-top {
background: rgb(151, 138, 63);
margin: 0;
padding: 1rem 0;
display: flex;
justify-content: flex-end;
align-items: center;
}
.nav-item {
list-style: none;
margin-right: 1.2rem;
padding: 5px 10px;
}
.nav-top ul {
margin: 0 auto;
padding: 0;
list-style: none
}
.nav-top li:hover {
display: block
}
.nav-top a:hover {
background: #F2F2F2;
color: #444444;
}
.nav-item a {
text-decoration: none;
color: white;
width: 100px;
}
.nav-item:first-child {
margin-right: auto;
margin-left: 2rem;
}
.nav-item a:hover {
color: aqua;
}
<navbar>
<ul class="nav-top">
<li class="nav-item"><label class="logo">LoremX</label></li>
<li class="nav-item"><a href="index.html">Home</a></li>
<li class="nav-item"><a href="#">About</a></li>
<li class="nav-item" id="contact"><a href="#">Contact</a></li>
</ul>
</navbar>
| Need help personalizing my CSS navbar: how do I highlight the element on mouseover? | The only :hover declaration you have for li is the default value of display: block while the color change declarations are made only for a. However, the effect that I believe you are trying to achieve is better accomplished by making the anchors block-level with padding.
Not related to the hover effect, just correcting your markup:
You have a .nav-top ul selector, but the example doesn't include a nested ul
I suspect that you are misusing the label element.
navbar is not an HTML element and I suspect you want nav
.nav-top {
display: flex;
margin: 0;
padding: 0;
list-style-type: none;
background: rgb(151, 138, 63);
}
.logo {
margin-right: auto;
margin-left: 2rem;
padding: 5px 10px;
align-self: center;
}
.nav-item {
margin-right: 1.2rem;
padding: 5px 10px;
}
.nav-item a {
display: block;
padding: .5em;
text-decoration: none;
color: white;
}
.nav-item a:hover {
color: aqua;
background: #F2F2F2;
}
<nav>
<ul class="nav-top">
<li class="logo">LoremX</li>
<li class="nav-item"><a href="#">Home</a></li>
<li class="nav-item"><a href="#">About</a></li>
<li class="nav-item" id="contact"><a href="#">Contact</a></li>
</ul>
</nav>
|
76383862 | 76388595 | I have a Dockerfile which I later want to feed some initial sql dump into:
FROM mysql:debian
EXPOSE 3306
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]
Then there is my docker-compose.yaml:
version: '3'
services:
mysql:
container_name: ${CONTAINER_NAME}
build:
context: .
restart: always
environment:
- MYSQL_ROOT_PASSWORD=${ROOT_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
I can docker exec -it sqlcontainer mysql -uroot -p into my container just fine (works with the $MYSQL_USER as well), but when I try to do the same from outside, e.g. with mysql -uroot -p, I always get ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES).
I'm out of my wits. What am I doing wrong? The communication to mysqld inside the container apparently works, otherwise I wouldn't get back any response.
I'm on Windows, if that's a hint for anyone. But the above is true for PowerShell, CMD and even Git Bash.
| Mysql in docker container doesn't accept credentials | It is embarrassing to admit, but both commenters @RiggsFolly and @DavidMaze are right in what they said: I hadn't used MySQL for quite a while on my machine. When I checked the task manager, I indeed found a local mysqld running already, which of course was primed with different credentials. After killing the process (and adding the missing port in the compose file), I was able to connect to my containerized instance.
Hope this will help out somebody else in the future who falls into the same trap.
|